readme
Browse files
- .gitignore +1 -0
- README.md +87 -1
- loading.py +64 -0

.gitignore
ADDED
@@ -0,0 +1 @@
+./loading.py
README.md
CHANGED
@@ -158,5 +158,91 @@ configs:
path: MIRACL/keyphrases/*
---

# DAPR: Document-Aware Passage Retrieval

This dataset repo contains the queries, passages/documents, and judgements for the data used in the [DAPR](https://arxiv.org/abs/2305.13915) paper.

## Overview

The DAPR benchmark contains 5 datasets:

| Dataset | #Queries (test) | #Documents | #Passages |
| --- | --- | --- | --- |
| MS MARCO | 2,722 | 1,359,163 | 2,383,023* |
| Natural Questions | 3,610 | 108,626 | 2,682,017 |
| MIRACL | 799 | 5,758,285 | 32,893,221 |
| Genomics | 62 | 162,259 | 12,641,127 |
| ConditionalQA | 271 | 652 | 69,199 |

Additionally, NQ-hard, the hard subset of queries from Natural Questions, is also included (516 queries in total). These queries are hard because retrieving the relevant passages requires understanding the document context (e.g. coreference, the main topic, multi-hop reasoning, or acronyms).

> Note: the MS MARCO documents do not come with gold paragraph segmentation, so we segment each document only by keeping the judged passages (from the MS MARCO Passage Ranking task) intact and leaving the surrounding parts of the document as they are. The judged passages are marked by `is_candidate==true`.

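Since only the `is_candidate==true` passages carry judgements in MS MARCO, a retrieval run would typically score just those. A minimal sketch of the filtering, using made-up rows shaped like the corpus entries (the ids and texts are illustrative, not from the dataset):

```python
# Made-up passage rows shaped like the corpus entries; only one is judged.
passages = [
    {"_id": "d1-p0", "text": "Intro paragraph.", "is_candidate": False},
    {"_id": "d1-p1", "text": "A judged MS MARCO passage.", "is_candidate": True},
    {"_id": "d1-p2", "text": "Trailing paragraph.", "is_candidate": False},
]

# Keep only the passages that are candidates for retrieval.
candidates = [p for p in passages if p["is_candidate"]]
print([p["_id"] for p in candidates])  # ['d1-p1']
```
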
## Load the dataset

### Loading the passages

One can load the passages like this:
```python
from datasets import load_dataset

dataset_name = "ConditionalQA"
passages = load_dataset("kwang2049/dapr", f"{dataset_name}-corpus", split="test")
for passage in passages:
    passage["_id"]  # passage id
    passage["text"]  # passage text
    passage["title"]  # title of the document containing this passage
    passage["doc_id"]  # id of the document containing this passage
    passage["paragraph_no"]  # the paragraph number within the document
    passage["total_paragraphs"]  # total number of paragraphs/passages in the document
    passage["is_candidate"]  # whether this passage is a candidate for retrieval
```

Or stream the dataset without downloading it beforehand:
```python
from datasets import load_dataset

dataset_name = "ConditionalQA"
passages = load_dataset(
    "kwang2049/dapr", f"{dataset_name}-corpus", split="test", streaming=True
)
for passage in passages:
    passage["_id"]  # passage id
    passage["text"]  # passage text
    passage["title"]  # title of the document containing this passage
    passage["doc_id"]  # id of the document containing this passage
    passage["paragraph_no"]  # the paragraph number within the document
    passage["total_paragraphs"]  # total number of paragraphs/passages in the document
    passage["is_candidate"]  # whether this passage is a candidate for retrieval
```

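A streamed split is consumed lazily, so to inspect only the first few records without iterating the whole corpus, `itertools.islice` works on any iterable. A sketch, shown here with a stand-in generator instead of the real stream:

```python
from itertools import islice

def fake_stream():
    # Stand-in for the streaming dataset: yields passage-shaped dicts.
    for i in range(1000):
        yield {"_id": f"doc0-p{i}", "paragraph_no": i}

# Take just the first three records from the (lazy) stream.
first_three = list(islice(fake_stream(), 3))
print([p["_id"] for p in first_three])  # ['doc0-p0', 'doc0-p1', 'doc0-p2']
```
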
### Loading the qrels
The qrels split contains the query relevance annotations, i.e., the relevance score for each judged (query, passage) pair.
```python
from datasets import load_dataset

dataset_name = "ConditionalQA"
qrels = load_dataset("kwang2049/dapr", f"{dataset_name}-qrels", split="test")
for qrel in qrels:
    qrel["query_id"]  # query id (the query text is available in ConditionalQA-queries)
    qrel["corpus_id"]  # passage id
    qrel["score"]  # gold judgement
```

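Evaluation toolkits such as pytrec_eval expect these flat rows grouped into a nested `{query_id: {passage_id: score}}` mapping. A minimal sketch of the grouping, using made-up rows shaped like the qrels entries above:

```python
from collections import defaultdict

# Made-up qrel rows shaped like the entries above (ids are illustrative).
rows = [
    {"query_id": "q1", "corpus_id": "d1-p1", "score": 1},
    {"query_id": "q1", "corpus_id": "d2-p0", "score": 2},
    {"query_id": "q2", "corpus_id": "d3-p4", "score": 1},
]

# Group the flat rows by query id.
qrels_dict = defaultdict(dict)
for row in rows:
    qrels_dict[row["query_id"]][row["corpus_id"]] = row["score"]

print(dict(qrels_dict))
# {'q1': {'d1-p1': 1, 'd2-p0': 2}, 'q2': {'d3-p4': 1}}
```
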
The NQ-hard dataset is presented in an extended format of the normal qrels, with additional columns:
```python
from datasets import load_dataset

qrels = load_dataset("kwang2049/dapr", "nq-hard", split="test")
for qrel in qrels:
    qrel["query_id"]  # query id (the query text is also in the "query" column below)
    qrel["corpus_id"]  # passage id
    qrel["score"]  # gold judgement

    # Additional columns:
    qrel["query"]  # query text
    qrel["text"]  # passage text
    qrel["title"]  # title of the document containing this passage
    qrel["doc_id"]  # id of the document containing this passage
    qrel["categories"]  # list of categories describing this query-passage pair
    qrel["url"]  # url of the document in Wikipedia
```

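To see which kinds of context understanding dominate NQ-hard, the `categories` lists can be tallied across pairs. A sketch with made-up rows, reusing the category names mentioned earlier (coreference, main topic, multi-hop reasoning, acronym):

```python
from collections import Counter

# Made-up NQ-hard rows; each query-passage pair can carry several categories.
rows = [
    {"query_id": "q1", "categories": ["coreference"]},
    {"query_id": "q2", "categories": ["multi-hop reasoning", "acronym"]},
    {"query_id": "q3", "categories": ["coreference", "main topic"]},
]

# Count how often each category occurs across all pairs.
counts = Counter(cat for row in rows for cat in row["categories"])
print(counts.most_common())
# [('coreference', 2), ('multi-hop reasoning', 1), ('acronym', 1), ('main topic', 1)]
```
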
## Note

This dataset was created with `datasets==2.15.0`. Make sure to use this or a newer version of the datasets library.

loading.py
ADDED
@@ -0,0 +1,64 @@
from datasets import load_dataset

dataset_name = "ConditionalQA"

# Load the passages:
passages = load_dataset("kwang2049/dapr", f"{dataset_name}-corpus", split="test")
for passage in passages:
    passage["_id"]  # passage id
    passage["text"]  # passage text
    passage["title"]  # title of the document containing this passage
    passage["doc_id"]  # id of the document containing this passage
    passage["paragraph_no"]  # the paragraph number within the document
    passage["total_paragraphs"]  # total number of paragraphs/passages in the document
    passage["is_candidate"]  # whether this passage is a candidate for retrieval

# Or stream the passages without downloading the full corpus beforehand:
passages = load_dataset(
    "kwang2049/dapr", f"{dataset_name}-corpus", split="test", streaming=True
)
for passage in passages:
    passage["_id"]  # passage id
    passage["text"]  # passage text
    passage["title"]  # title of the document containing this passage
    passage["doc_id"]  # id of the document containing this passage
    passage["paragraph_no"]  # the paragraph number within the document
    passage["total_paragraphs"]  # total number of paragraphs/passages in the document
    passage["is_candidate"]  # whether this passage is a candidate for retrieval

# Load the documents:
docs = load_dataset("kwang2049/dapr", f"{dataset_name}-docs", split="test")
for doc in docs:
    doc["doc_id"]  # document id
    doc["title"]  # doc title
    doc["passage_ids"]  # list of passage ids in the document
    doc["passages"]  # list of passage/paragraph texts in the document

# Load the qrels:
qrels = load_dataset("kwang2049/dapr", f"{dataset_name}-qrels", split="test")
for qrel in qrels:
    qrel["query_id"]  # query id (the query text is available in ConditionalQA-queries)
    qrel["corpus_id"]  # passage id
    qrel["score"]  # gold judgement

# Load NQ-hard (qrels extended with additional columns):
qrels = load_dataset("kwang2049/dapr", "nq-hard", split="test")
for qrel in qrels:
    qrel["query_id"]  # query id (the query text is also in the "query" column below)
    qrel["corpus_id"]  # passage id
    qrel["score"]  # gold judgement

    # Additional columns:
    qrel["query"]  # query text
    qrel["text"]  # passage text
    qrel["title"]  # title of the document containing this passage
    qrel["doc_id"]  # id of the document containing this passage
    qrel["categories"]  # list of categories describing this query-passage pair
    qrel["url"]  # url of the document in Wikipedia