---
configs:
- config_name: ConditionalQA-corpus
  data_files:
  - split: test
    path: ConditionalQA/corpus/*
- config_name: ConditionalQA-corpus_coref
  data_files:
  - split: test
    path: ConditionalQA/corpus_coref/*
- config_name: ConditionalQA-docs
  data_files:
  - split: test
    path: ConditionalQA/docs/*
- config_name: ConditionalQA-keyphrases
  data_files:
  - split: test
    path: ConditionalQA/keyphrases/*
- config_name: ConditionalQA-qrels
  data_files:
  - split: train
    path: ConditionalQA/qrels/train.parquet
  - split: dev
    path: ConditionalQA/qrels/dev.parquet
  - split: test
    path: ConditionalQA/qrels/test.parquet
- config_name: ConditionalQA-queries
  data_files:
  - split: train
    path: ConditionalQA/queries/train.parquet
  - split: dev
    path: ConditionalQA/queries/dev.parquet
  - split: test
    path: ConditionalQA/queries/test.parquet
- config_name: Genomics-corpus
  data_files:
  - split: test
    path: Genomics/corpus/*
- config_name: Genomics-corpus_coref
  data_files:
  - split: test
    path: Genomics/corpus_coref/*
- config_name: Genomics-docs
  data_files:
  - split: test
    path: Genomics/docs/*
- config_name: Genomics-keyphrases
  data_files:
  - split: test
    path: Genomics/keyphrases/*
- config_name: Genomics-qrels
  data_files:
  - split: test
    path: Genomics/qrels/test.parquet
- config_name: Genomics-queries
  data_files:
  - split: test
    path: Genomics/queries/test.parquet
- config_name: MIRACL-corpus
  data_files:
  - split: test
    path: MIRACL/corpus/*
- config_name: MIRACL-corpus_coref
  data_files:
  - split: test
    path: MIRACL/corpus_coref/*
- config_name: MIRACL-docs
  data_files:
  - split: test
    path: MIRACL/docs/*
- config_name: MIRACL-keyphrases
  data_files:
  - split: test
    path: MIRACL/keyphrases/*
- config_name: MIRACL-qrels
  data_files:
  - split: train
    path: MIRACL/qrels/train.parquet
  - split: dev
    path: MIRACL/qrels/dev.parquet
  - split: test
    path: MIRACL/qrels/test.parquet
- config_name: MIRACL-queries
  data_files:
  - split: train
    path: MIRACL/queries/train.parquet
  - split: dev
    path: MIRACL/queries/dev.parquet
  - split: test
    path: MIRACL/queries/test.parquet
- config_name: MSMARCO-corpus
  data_files:
  - split: test
    path: MSMARCO/corpus/*
- config_name: MSMARCO-corpus_coref
  data_files:
  - split: test
    path: MSMARCO/corpus_coref/*
- config_name: MSMARCO-docs
  data_files:
  - split: test
    path: MSMARCO/docs/*
- config_name: MSMARCO-keyphrases
  data_files:
  - split: test
    path: MSMARCO/keyphrases/*
- config_name: MSMARCO-qrels
  data_files:
  - split: train
    path: MSMARCO/qrels/train.parquet
  - split: dev
    path: MSMARCO/qrels/dev.parquet
  - split: test
    path: MSMARCO/qrels/test.parquet
- config_name: MSMARCO-queries
  data_files:
  - split: train
    path: MSMARCO/queries/train.parquet
  - split: dev
    path: MSMARCO/queries/dev.parquet
  - split: test
    path: MSMARCO/queries/test.parquet
- config_name: NaturalQuestions-corpus
  data_files:
  - split: test
    path: NaturalQuestions/corpus/*
- config_name: NaturalQuestions-corpus_coref
  data_files:
  - split: test
    path: NaturalQuestions/corpus_coref/*
- config_name: NaturalQuestions-docs
  data_files:
  - split: test
    path: NaturalQuestions/docs/*
- config_name: NaturalQuestions-keyphrases
  data_files:
  - split: test
    path: NaturalQuestions/keyphrases/*
- config_name: NaturalQuestions-qrels
  data_files:
  - split: dev
    path: NaturalQuestions/qrels/dev.parquet
  - split: test
    path: NaturalQuestions/qrels/test.parquet
- config_name: NaturalQuestions-queries
  data_files:
  - split: dev
    path: NaturalQuestions/queries/dev.parquet
  - split: test
    path: NaturalQuestions/queries/test.parquet
- config_name: default
  data_files:
  - split: test
    path: MIRACL/corpus_coref/test-*
- config_name: nq-hard
  data_files:
  - split: test
    path: NaturalQuestions/nq-hard/*
dataset_info:
  features:
  - name: _id
    dtype: string
  - name: text
    dtype: string
  - name: title
    dtype: string
  - name: doc_id
    dtype: string
  - name: paragraph_no
    dtype: int64
  - name: total_paragraphs
    dtype: int64
  - name: is_candidate
    dtype: bool
  splits:
  - name: test
    num_bytes: 16639778612
    num_examples: 32893221
  download_size: 8483447641
  dataset_size: 16639778612
---

# DAPR: Document-Aware Passage Retrieval

This dataset repo contains the queries, passages/documents, and judgements for the data used in the [DAPR](https://arxiv.org/abs/2305.13915) paper.

## Overview

The DAPR benchmark contains 5 datasets:

| Dataset | #Queries (test) | #Documents | #Passages |
| --- | --- | --- | --- |
| [MS MARCO](https://microsoft.github.io/msmarco/) | 2,722 | 1,359,163 | 2,383,023* |
| [Natural Questions](https://ai.google.com/research/NaturalQuestions) | 3,610 | 108,626 | 2,682,017 |
| [MIRACL](https://project-miracl.github.io/) | 799 | 5,758,285 | 32,893,221 |
| [Genomics](https://dmice.ohsu.edu/trec-gen/) | 62 | 162,259 | 12,641,127 |
| [ConditionalQA](https://haitian-sun.github.io/conditionalqa/) | 271 | 652 | 69,199 |

Additionally, NQ-hard, the hard subset of queries from Natural Questions, is included (516 queries in total). These queries are hard because retrieving the relevant passages requires understanding the document context, e.g. resolving coreferences, identifying the main topic, multi-hop reasoning, or expanding acronyms.

> *Notes: the MS MARCO documents do not come with gold paragraph segmentation. We segment each document only as far as needed to make the judged passages (from the MS MARCO Passage Ranking task) stand out as separate passages, leaving the surrounding parts as the remaining passages. The judged passages are marked by `is_candidate==true` (see the sketch after these notes).

> For Natural Questions, the training split is not provided here, because its duplicate timestamps are not compatible with the queries/qrels/corpus format. Please refer to https://public.ukp.informatik.tu-darmstadt.de/kwang/dapr/data/NaturalQuestions/ for the training split.
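For example, the judged MS MARCO passages can be selected via this flag. A minimal sketch (streaming is used here to avoid downloading the full corpus; see the next section for loading details):

```python
from datasets import load_dataset

# Stream the MS MARCO corpus and keep only the judged candidate passages.
passages = load_dataset("UKPLab/dapr", "MSMARCO-corpus", split="test", streaming=True)
candidates = (passage for passage in passages if passage["is_candidate"])
```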
## Load the dataset

### Loading the passages

One can load the passages like this:

```python
from datasets import load_dataset

dataset_name = "ConditionalQA"
passages = load_dataset("UKPLab/dapr", f"{dataset_name}-corpus", split="test")
for passage in passages:
    passage["_id"]  # passage id
    passage["text"]  # passage text
    passage["title"]  # doc title
    passage["doc_id"]  # doc id
    passage["paragraph_no"]  # the paragraph number within the document
    passage["total_paragraphs"]  # how many paragraphs/passages in total in the document
    passage["is_candidate"]  # is this passage a candidate for retrieval
```

Or stream the dataset without downloading it beforehand:

```python
from datasets import load_dataset

dataset_name = "ConditionalQA"
passages = load_dataset(
    "UKPLab/dapr", f"{dataset_name}-corpus", split="test", streaming=True
)
for passage in passages:
    passage["_id"]  # passage id
    passage["text"]  # passage text
    passage["title"]  # doc title
    passage["doc_id"]  # doc id
    passage["paragraph_no"]  # the paragraph number within the document
    passage["total_paragraphs"]  # how many paragraphs/passages in total in the document
    passage["is_candidate"]  # is this passage a candidate for retrieval
```
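Since every passage carries its `doc_id`, `paragraph_no`, and `total_paragraphs`, the full documents can be reassembled from the corpus. A minimal sketch (joining paragraphs with blank lines is an assumption, not part of the data format):

```python
from collections import defaultdict

from datasets import load_dataset

passages = load_dataset("UKPLab/dapr", "ConditionalQA-corpus", split="test")

# Collect the passage texts of each document, keyed by paragraph number.
doc_to_paragraphs = defaultdict(dict)
for passage in passages:
    doc_to_paragraphs[passage["doc_id"]][passage["paragraph_no"]] = passage["text"]

# Rebuild each document by joining its paragraphs in the original order.
documents = {
    doc_id: "\n\n".join(paragraphs[no] for no in sorted(paragraphs))
    for doc_id, paragraphs in doc_to_paragraphs.items()
}
```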
### Loading the qrels

The qrels split contains the query relevance annotations, i.e. the relevance score for (query, passage) pairs:

```python
from datasets import load_dataset

dataset_name = "ConditionalQA"
qrels = load_dataset("UKPLab/dapr", f"{dataset_name}-qrels", split="test")
for qrel in qrels:
    qrel["query_id"]  # query id (the text is available in ConditionalQA-queries)
    qrel["corpus_id"]  # passage id
    qrel["score"]  # gold judgement
```

We provide the NQ-hard dataset in an extended version of the normal qrels format, with additional columns:

```python
from datasets import load_dataset

qrels = load_dataset("UKPLab/dapr", "nq-hard", split="test")
for qrel in qrels:
    qrel["query_id"]  # query id
    qrel["corpus_id"]  # passage id
    qrel["score"]  # gold judgement

    # Additional columns:
    qrel["query"]  # query text
    qrel["text"]  # passage text
    qrel["title"]  # doc title
    qrel["doc_id"]  # doc id
    qrel["categories"]  # list of categories of this query-passage pair
    qrel["url"]  # url to the document in Wikipedia
```
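### Loading the queries

The query texts referenced by `query_id` in the qrels can be loaded from the corresponding `*-queries` configs. A minimal sketch (the `_id` and `text` fields follow their usage in the retrieval example below):

```python
from datasets import load_dataset

dataset_name = "ConditionalQA"
queries = load_dataset("UKPLab/dapr", f"{dataset_name}-queries", split="test")
for query in queries:
    query["_id"]  # query id, matches qrel["query_id"]
    query["text"]  # query text
```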
## Retrieval and Evaluation

The following example shows how the dataset can be used to build a semantic search application.

> This example is based on [clddp](https://github.com/kwang2049/clddp/tree/main) (`pip install -U clddp`). One can further explore this [example](https://github.com/kwang2049/clddp/blob/main/examples/search_fiqa.sh) for convenient multi-GPU exact search.

```python
# Please install clddp with `pip install -U clddp`
from typing import Dict

import numpy as np
import pytrec_eval
import torch
from clddp.dm import Passage, Query, Separator
from clddp.retriever import Pooling, Retriever, RetrieverConfig, SimilarityFunction
from datasets import load_dataset


# Define the retriever (DRAGON+ from https://arxiv.org/abs/2302.07452)
class DRAGONPlus(Retriever):
    def __init__(self) -> None:
        config = RetrieverConfig(
            query_model_name_or_path="facebook/dragon-plus-query-encoder",
            passage_model_name_or_path="facebook/dragon-plus-context-encoder",
            shared_encoder=False,
            sep=Separator.blank,
            pooling=Pooling.cls,
            similarity_function=SimilarityFunction.dot_product,
            query_max_length=512,
            passage_max_length=512,
        )
        super().__init__(config)


# Load data:
passages = load_dataset("UKPLab/dapr", "ConditionalQA-corpus", split="test")
queries = load_dataset("UKPLab/dapr", "ConditionalQA-queries", split="test")
qrels_rows = load_dataset("UKPLab/dapr", "ConditionalQA-qrels", split="test")
qrels: Dict[str, Dict[str, float]] = {}
for qrel_row in qrels_rows:
    qid = qrel_row["query_id"]
    pid = qrel_row["corpus_id"]
    rel = qrel_row["score"]
    qrels.setdefault(qid, {})
    qrels[qid][pid] = rel

# Encode queries and passages (refer to https://github.com/kwang2049/clddp/blob/main/examples/search_fiqa.sh for multi-GPU exact search):
retriever = DRAGONPlus()
retriever.eval()
queries = [Query(query_id=query["_id"], text=query["text"]) for query in queries]
passages = [
    Passage(passage_id=passage["_id"], text=passage["text"]) for passage in passages
]
query_embeddings = retriever.encode_queries(queries)
with torch.no_grad():  # Takes around a minute on a V100 GPU
    passage_embeddings, passage_mask = retriever.encode_passages(passages)

# Calculate the similarities and keep top-K:
similarity_scores = torch.matmul(
    query_embeddings, passage_embeddings.t()
)  # (query_num, passage_num)
topk = torch.topk(similarity_scores, k=10)
topk_values: torch.Tensor = topk[0]
topk_indices: torch.LongTensor = topk[1]
topk_value_lists = topk_values.tolist()
topk_index_lists = topk_indices.tolist()

# Run evaluation with pytrec_eval:
retrieval_scores: Dict[str, Dict[str, float]] = {}
for query_i, (values, indices) in enumerate(zip(topk_value_lists, topk_index_lists)):
    query_id = queries[query_i].query_id
    retrieval_scores.setdefault(query_id, {})
    for value, passage_i in zip(values, indices):
        passage_id = passages[passage_i].passage_id
        retrieval_scores[query_id][passage_id] = value
evaluator = pytrec_eval.RelevanceEvaluator(
    query_relevance=qrels, measures=["ndcg_cut_10"]
)
query_performances: Dict[str, Dict[str, float]] = evaluator.evaluate(retrieval_scores)
ndcg = np.mean([score["ndcg_cut_10"] for score in query_performances.values()])
print(ndcg)  # 0.21796083196880855
```

## Note

This dataset was created with `datasets==2.15.0`. Make sure to use this or a newer version of the datasets library.

## Citation

If you use the code/data, feel free to cite our publication [DAPR: A Benchmark on Document-Aware Passage Retrieval](https://arxiv.org/abs/2305.13915):

```bibtex
@article{wang2023dapr,
    title = "DAPR: A Benchmark on Document-Aware Passage Retrieval",
    author = "Kexin Wang and Nils Reimers and Iryna Gurevych",
    journal = "arXiv preprint arXiv:2305.13915",
    year = "2023",
    url = "https://arxiv.org/abs/2305.13915",
}
```