---
configs:
- config_name: narrativeqa
  data_files:
  - split: corpus
    path: narrativeqa/corpus.jsonl
  - split: queries
    path: narrativeqa/queries.jsonl
  - split: qrels
    path: narrativeqa/qrels.jsonl
- config_name: summ_screen_fd
  data_files:
  - split: corpus
    path: summ_screen_fd/corpus.jsonl
  - split: queries
    path: summ_screen_fd/queries.jsonl
  - split: qrels
    path: summ_screen_fd/qrels.jsonl
- config_name: qmsum
  data_files:
  - split: corpus
    path: qmsum/corpus.jsonl
  - split: queries
    path: qmsum/queries.jsonl
  - split: qrels
    path: qmsum/qrels.jsonl
- config_name: 2wikimqa
  data_files:
  - split: corpus
    path: 2wikimqa/corpus.jsonl
  - split: queries
    path: 2wikimqa/queries.jsonl
  - split: qrels
    path: 2wikimqa/qrels.jsonl
- config_name: passkey
  data_files:
  - split: corpus
    path: passkey/corpus.jsonl
  - split: queries
    path: passkey/queries.jsonl
  - split: qrels
    path: passkey/qrels.jsonl
- config_name: needle
  data_files:
  - split: corpus
    path: needle/corpus.jsonl
  - split: queries
    path: needle/queries.jsonl
  - split: qrels
    path: needle/qrels.jsonl
language:
- en
tags:
- Long Context
size_categories:
- 1K<n<10K
---

# LongEmbed

LongEmbed is a benchmark for long-context retrieval, comprising four real-world retrieval tasks and two synthetic tasks.

## Evaluation

LongEmbed can be evaluated with the [mteb](https://github.com/embeddings-benchmark/mteb) package (>=1.6.22). For the four real tasks, you can evaluate as follows:

```python
from mteb import MTEB

retrieval_task_list = ["LEMBSummScreenFDRetrieval", "LEMBQMSumRetrieval", "LEMBWikimQARetrieval", "LEMBNarrativeQARetrieval"]
output_dict = {}
evaluation = MTEB(tasks=retrieval_task_list)

# TODO: load the model before evaluation
results = evaluation.run(model, output_folder=args.output_dir, overwrite_results=True, batch_size=args.batch_size, verbosity=0)

# Collect nDCG@1 and nDCG@10 for each task, using the test split where available.
for key, value in results.items():
    split = "test" if "test" in value else "validation"
    output_dict[key] = {"ndcg@1": value[split]["ndcg_at_1"], "ndcg@10": value[split]["ndcg_at_10"]}
print(output_dict)
```

For the two synthetic tasks, since we examine a broad context range of {256, 512, 1024, 2048, 4096, 8192, 16384, 32768} tokens, an additional `context_length` parameter is required. You may evaluate as follows:

```python
from mteb import MTEB

needle_passkey_task_list = ["LEMBNeedleRetrieval", "LEMBPasskeyRetrieval"]
output_dict = {}
context_length_list = [256, 512, 1024, 2048, 4096, 8192, 16384, 32768]
evaluation = MTEB(tasks=needle_passkey_task_list)

# TODO: load the model before evaluation
results = evaluation.run(model, output_folder=args.output_dir, overwrite_results=True, batch_size=args.batch_size, verbosity=0)

# Collect nDCG@1 at every context length, plus the average over all lengths.
for key, value in results.items():
    needle_passkey_score_list = []
    for ctx_len in context_length_list:
        needle_passkey_score_list.append([ctx_len, value[f"test_{ctx_len}"]["ndcg_at_1"]])
    needle_passkey_score_list.append(["avg", sum([x[1] for x in needle_passkey_score_list]) / len(context_length_list)])
    output_dict[key] = {item[0]: item[1] for item in needle_passkey_score_list}
print(output_dict)
```

## Task Description

LongEmbed includes 4 real-world retrieval tasks curated from long-form QA and summarization. Note that for the QA and summarization datasets, we use the questions and summaries as queries, respectively.

- [NarrativeQA](https://huggingface.co/datasets/narrativeqa): A QA dataset comprising long stories averaging 50,474 words and corresponding questions about specific content such as characters and events. We adopt the `test` set of the original dataset.
- [2WikiMultihopQA](https://huggingface.co/datasets/THUDM/LongBench/viewer/2wikimqa_e): A multi-hop QA dataset featuring questions with up to 5 hops, synthesized through manually designed templates to prevent shortcut solutions. We use the `test` split of the length-uniformly sampled version from [LongBench](https://huggingface.co/datasets/THUDM/LongBench).
- [QMSum](https://huggingface.co/datasets/tau/scrolls/blob/main/qmsum.zip): A query-based meeting summarization dataset that requires selecting and summarizing relevant segments of meetings in response to queries. We use the version processed by [SCROLLS](https://huggingface.co/datasets/tau/scrolls). Since its test set does not include ground-truth summaries, and its validation set contains only 60 documents, which is too few for document retrieval, we include the `train` set in addition to the `validation` set.
- [SummScreenFD](https://huggingface.co/datasets/tau/scrolls/blob/main/summ_screen_fd.zip): A screenplay summarization dataset comprising pairs of TV series transcripts and human-written summaries. As with QMSum, plot details are scattered throughout the transcript and must be integrated to form succinct descriptions in the summary. We use the `validation` set of the version processed by [SCROLLS](https://huggingface.co/datasets/tau/scrolls).

We also include two synthetic tasks, namely needle retrieval and passkey retrieval. The former is tailored from [Needle-in-a-Haystack Retrieval](https://github.com/gkamradt/LLMTest_NeedleInAHaystack) for LLMs. The latter is adapted from [Personalized Passkey Retrieval](https://huggingface.co/datasets/intfloat/personalized_passkey_retrieval), with slight changes for evaluation efficiency. The advantage of synthetic data is that we can flexibly control the context length and the distribution of target information. For both tasks, we evaluate a broad context range of {256, 512, 1024, 2048, 4096, 8192, 16384, 32768} tokens. For each context length, we include 50 test samples, each comprising 1 query and 100 candidate documents.
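
Each task provides a `corpus`, `queries`, and `qrels` split, as declared in the `configs` section of this card. Below is a minimal loading sketch using the `datasets` library; the repository id `dwzhu/LongEmbed` is an assumption here and should be replaced with the actual id of this dataset repo if it differs.

```python
from datasets import load_dataset

# Assumed repository id -- substitute the actual id of this dataset repo.
repo_id = "dwzhu/LongEmbed"

# Each config (e.g. "narrativeqa", "qmsum", "passkey") exposes three splits:
# the candidate documents, the queries, and the query-document relevance labels.
corpus = load_dataset(repo_id, "narrativeqa", split="corpus")
queries = load_dataset(repo_id, "narrativeqa", split="queries")
qrels = load_dataset(repo_id, "narrativeqa", split="qrels")

print(len(corpus), len(queries), len(qrels))
print(corpus[0].keys())  # inspect the field names of a corpus record
```

When evaluating through `mteb` as shown above, this loading is handled internally by the task classes, so manual loading is only needed for custom evaluation pipelines.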
## Task Statistics

| Dataset | Domain | # Queries | # Docs | Avg. Query Words | Avg. Doc Words |
|---------|--------|-----------|--------|------------------|----------------|
| NarrativeQA | Literature, Film | 10,449 | 355 | 9 | 50,474 |
| QMSum | Meeting | 1,527 | 197 | 71 | 10,058 |
| 2WikimQA | Wikipedia | 300 | 300 | 12 | 6,132 |
| SummScreenFD | ScreenWriting | 336 | 336 | 102 | 5,582 |
| Passkey | Synthetic | 400 | 800 | 11 | - |
| Needle | Synthetic | 400 | 800 | 7 | - |

## Citation

If you find our paper helpful, please consider citing it as follows:

```
```