dwzhu committed on
Commit
3319087
1 Parent(s): 6e34664

Update README.md

Files changed (1)
  1. README.md +19 -23
README.md CHANGED
@@ -75,42 +75,38 @@ data_list = load_dataset(path="dwzhu/LongEmbed", name="dataset_name", split="spl
 
  #### Evaluation
 
- The evaluation of LongEmbed can be easily conducted using MTEB. For the four real tasks, you can evaluate as follows:
 
  ```python
- from tabulate import tabulate
  from mteb import MTEB
-
  retrieval_task_list = ["LEMBSummScreenFDRetrieval", "LEMBQMSumRetrieval", "LEMBWikimQARetrieval", "LEMBNarrativeQARetrieval"]
- retrieval_task_results = []
-
  evaluation = MTEB(tasks=retrieval_task_list)
  results = evaluation.run(model, output_folder=args.output_dir, overwrite_results=True, batch_size=args.batch_size, verbosity=0)
-
  for key, value in results.items():
      split = "test" if "test" in value else "validation"
-     retrieval_task_results.append([key, value[split]["ndcg_at_1"], value[split]["ndcg_at_10"]])
      output_dict[key] = {"ndcg@1": value[split]["ndcg_at_1"], "ndcg@10": value[split]["ndcg_at_10"]}
-
- print(tabulate(retrieval_task_results, headers=["Task", "NDCG@1", "NDCG@10"]))
  ```
 
- For the two synthetic tasks, since we examine a broad context range of $\{0.25, 0.5, 1, 2, 4, 8, 16, 32\} \times 1024$ tokens, an additional parameter of `context_length` is required. You may evaluate as follows:
 
  ```python
- from tabulate import tabulate
  from mteb import MTEB
-
- needle_passkey_score_list = []
- for ctx_len in [256, 512, 1024, 2048, 4096, 8192, 16384, 32768]:
-     print(f"Running task: NeedlesRetrieval, PasskeyRetrieval, context length: {ctx_len}")
-     evaluation = MTEB(tasks=["LEMBNeedleRetrieval", "LEMBPasskeyRetrieval"])
-     results = evaluation.run(model, context_length=ctx_len, overwrite_results=True, batch_size=args.batch_size)
-     needle_passkey_score_list.append([ctx_len, results["LEMBNeedleRetrieval"]["test"]["ndcg_at_1"], results["LEMBPasskeyRetrieval"]["test"]["ndcg_at_1"]])
-
- needle_passkey_score_list.append(["avg", sum([x[1] for x in needle_passkey_score_list]) / len(context_length_list), sum([x[2] for x in needle_passkey_score_list]) / len(context_length_list)])
-
- print(tabulate(needle_passkey_score_list, headers=["Context Length", "Needle-ACC", "Passkey-ACC"]))
  ```
 
  ## Task Description
@@ -122,7 +118,7 @@ LongEmbed includes 4 real-world retrieval tasks curated from long-form QA and su
  - [QMSum](https://huggingface.co/datasets/tau/scrolls/blob/main/qmsum.zip): A query-based meeting summarization dataset that requires selecting and summarizing relevant segments of meetings in response to queries. We use the version processed by [SCROLLS](https://huggingface.co/datasets/tau/scrolls). Since its test set does not include ground-truth summaries, and its validation set has only 60 documents, which is too small for document retrieval, we include the `train` set in addition to the `validation` set.
  - [SummScreenFD](https://huggingface.co/datasets/tau/scrolls/blob/main/summ_screen_fd.zip): A screenplay summarization dataset comprising pairs of TV series transcripts and human-written summaries. Similar to QMSum, its plot details are scattered throughout the transcript and must be integrated to form succinct descriptions in the summary. We use the `validation` set of the version processed by [SCROLLS](https://huggingface.co/datasets/tau/scrolls).
 
- We also include two synthetic tasks, namely needle and passkey retrieval. The former is tailored from the [Needle-in-a-Haystack Retrieval](https://github.com/gkamradt/LLMTest_NeedleInAHaystack) test for LLMs. The latter is adapted from [Personalized Passkey Retrieval](https://huggingface.co/datasets/intfloat/personalized_passkey_retrieval), with slight changes for evaluation efficiency. The advantage of synthetic data is that we can flexibly control the context length and the distribution of target information. For both tasks, we evaluate a broad context range of $\{0.25, 0.5, 1, 2, 4, 8, 16, 32\} \times 1024$ tokens. For each context length, we include 50 test samples, each comprising 1 query and 100 candidate documents.
 
 
  ## Task Statistics
 
  #### Evaluation
 
+ The evaluation of LongEmbed can be easily conducted using MTEB (>=1.6.22). For the four real tasks, you can evaluate as follows:
 
  ```python
  from mteb import MTEB
  retrieval_task_list = ["LEMBSummScreenFDRetrieval", "LEMBQMSumRetrieval", "LEMBWikimQARetrieval", "LEMBNarrativeQARetrieval"]
+ output_dict = {}
  evaluation = MTEB(tasks=retrieval_task_list)
+ # TODO: load the model before evaluation
  results = evaluation.run(model, output_folder=args.output_dir, overwrite_results=True, batch_size=args.batch_size, verbosity=0)
  for key, value in results.items():
      split = "test" if "test" in value else "validation"
      output_dict[key] = {"ndcg@1": value[split]["ndcg_at_1"], "ndcg@10": value[split]["ndcg_at_10"]}
+ print(output_dict)
  ```
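Both snippets leave model loading to you (see the `# TODO` lines), so `model` and `args` are assumed to already exist. As a minimal sketch that is not prescribed by this README, you could install the dependencies (e.g. `pip install "mteb>=1.6.22" sentence-transformers`) and define placeholder objects like this; the checkpoint name and values below are illustrative only:

```python
from argparse import Namespace

from sentence_transformers import SentenceTransformer

# Any embedding model exposing an encode() method works with MTEB;
# "intfloat/e5-base-v2" is only a placeholder choice.
model = SentenceTransformer("intfloat/e5-base-v2")

# Minimal stand-in for the `args` object referenced in the snippets.
args = Namespace(output_dir="results", batch_size=16)
```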
 
+ For the two synthetic tasks, since we examine a broad context range of {256, 512, 1024, 2048, 4096, 8192, 16384, 32768} tokens, an additional `context_length` parameter is required, and scores are reported separately for each context length. You may evaluate as follows:
 
  ```python
  from mteb import MTEB
+ needle_passkey_task_list = ["LEMBNeedleRetrieval", "LEMBPasskeyRetrieval"]
+ output_dict = {}
+ context_length_list = [256, 512, 1024, 2048, 4096, 8192, 16384, 32768]
+ evaluation = MTEB(tasks=needle_passkey_task_list)
+ # TODO: load the model before evaluation
+ results = evaluation.run(model, output_folder=args.output_dir, overwrite_results=True, batch_size=args.batch_size, verbosity=0)
+ for key, value in results.items():
+     needle_passkey_score_list = []
+     for ctx_len in context_length_list:
+         needle_passkey_score_list.append([ctx_len, value[f"test_{ctx_len}"]["ndcg_at_1"]])
+     needle_passkey_score_list.append(["avg", sum([x[1] for x in needle_passkey_score_list]) / len(context_length_list)])
+     output_dict[key] = {item[0]: item[1] for item in needle_passkey_score_list}
+ print(output_dict)
  ```
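An earlier revision of this snippet printed the scores as a table with `tabulate`. If you prefer that view, here is an optional sketch that reuses `output_dict` and `context_length_list` from the block above (nothing in it is required by the benchmark):

```python
from tabulate import tabulate  # optional pretty-printing dependency

headers = ["Task"] + [str(c) for c in context_length_list] + ["avg"]
rows = [
    [task] + [scores[c] for c in context_length_list] + [scores["avg"]]
    for task, scores in output_dict.items()
]
print(tabulate(rows, headers=headers))
```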
 
  ## Task Description
 
  - [QMSum](https://huggingface.co/datasets/tau/scrolls/blob/main/qmsum.zip): A query-based meeting summarization dataset that requires selecting and summarizing relevant segments of meetings in response to queries. We use the version processed by [SCROLLS](https://huggingface.co/datasets/tau/scrolls). Since its test set does not include ground-truth summaries, and its validation set has only 60 documents, which is too small for document retrieval, we include the `train` set in addition to the `validation` set.
  - [SummScreenFD](https://huggingface.co/datasets/tau/scrolls/blob/main/summ_screen_fd.zip): A screenplay summarization dataset comprising pairs of TV series transcripts and human-written summaries. Similar to QMSum, its plot details are scattered throughout the transcript and must be integrated to form succinct descriptions in the summary. We use the `validation` set of the version processed by [SCROLLS](https://huggingface.co/datasets/tau/scrolls).
 
+ We also include two synthetic tasks, namely needle and passkey retrieval. The former is tailored from the [Needle-in-a-Haystack Retrieval](https://github.com/gkamradt/LLMTest_NeedleInAHaystack) test for LLMs. The latter is adapted from [Personalized Passkey Retrieval](https://huggingface.co/datasets/intfloat/personalized_passkey_retrieval), with slight changes for evaluation efficiency. The advantage of synthetic data is that we can flexibly control the context length and the distribution of target information. For both tasks, we evaluate a broad context range of {256, 512, 1024, 2048, 4096, 8192, 16384, 32768} tokens. For each context length, we include 50 test samples, each comprising 1 query and 100 candidate documents.
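If you want to sanity-check these counts yourself, the loading snippet from the top of this README can be reused. In the sketch below, `"dataset_name"` and `"split"` are the README's own placeholders; substitute a concrete configuration and split from the dataset page:

```python
from datasets import load_dataset

# "dataset_name" and "split" are placeholders copied from the loading example
# earlier in this README; replace them with a real configuration and split name.
data_list = load_dataset(path="dwzhu/LongEmbed", name="dataset_name", split="split")
print(len(data_list), data_list[0].keys())
```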
 
 
  ## Task Statistics