Modalities: Tabular
Formats: csv
Size: < 1K
Libraries: Datasets, pandas
Dataset columns (relevance judgments): query-id (int64, 2–495 in the preview), corpus-id (int64, roughly 7.26k–264k), score (int64, 1 for every query-corpus pair shown).

NASA-IR benchmark

NASA SMD and IBM Research developed a domain-specific information retrieval benchmark, NASA-IR, spanning almost 500 question-answer pairs covering the Earth science, planetary science, heliophysics, astrophysics, and biological and physical sciences domains. Specifically, we sampled a set of 166 paragraphs from AGU, AMS, ADS, PMC, and PubMed and manually annotated each paragraph with 3 questions answerable from it, resulting in 498 questions. We used 398 of these questions as the training set and the remaining 100 as the validation set.
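
As a quick start, the benchmark can be loaded with the Hugging Face `datasets` library (or converted to pandas). The sketch below is illustrative and assumes the repository's default configuration; split names are not hard-coded because they may differ from the usual "train"/"validation" labels.

```python
# Illustrative loading sketch; the default configuration is an assumption,
# so we inspect whatever splits the repository actually exposes.
from datasets import load_dataset

ds = load_dataset("nasa-impact/nasa-smd-IR-benchmark")

for split_name, split in ds.items():
    print(split_name, split.num_rows, split.column_names)

# Work with the relevance judgments (query-id, corpus-id, score) in pandas.
first_split = next(iter(ds.values()))
qrels_df = first_split.to_pandas()
print(qrels_df.head())
```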

To comprehensively evaluate information retrieval systems and mimic real-world data, we combined 26,839 random ADS abstracts with these annotated paragraphs. On average, each query is 12 words long and each paragraph is 120 words long. We used Recall@10 as the evaluation metric, since each question has only one relevant document.
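
Because each question has exactly one relevant document, Recall@10 reduces to the fraction of queries whose relevant corpus-id appears among the top 10 retrieved ids. A minimal sketch of that computation follows; the retrieval run used here is a made-up stand-in, not part of the released data.

```python
# Recall@k when every query has a single relevant document: the score is the
# fraction of queries whose relevant corpus-id shows up in the top-k results.
def recall_at_k(qrels, run, k=10):
    """qrels: query-id -> relevant corpus-id; run: query-id -> ranked corpus-ids."""
    hits = sum(1 for qid, rel in qrels.items() if rel in run.get(qid, [])[:k])
    return hits / len(qrels)

# Toy example: query 1's relevant document is retrieved, query 2's is not.
toy_qrels = {1: 101, 2: 202}
toy_run = {1: [555, 101, 777], 2: [888, 999]}
print(recall_at_k(toy_qrels, toy_run, k=10))  # 0.5
```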

Evaluation results

(Evaluation results figure.)

Note: This dataset is released in support of the training and evaluation of the encoder language model "Indus".

The accompanying paper can be found here: https://arxiv.org/abs/2405.10725
