Dataset Card for "wiki_dpr"
Table of Contents
- Dataset Description
- Dataset Structure
- Dataset Creation
- Considerations for Using the Data
- Additional Information
Dataset Description
- Homepage: https://github.com/facebookresearch/DPR
- Repository: More Information Needed
- Paper: https://arxiv.org/abs/2004.04906
- Point of Contact: More Information Needed
- Size of downloaded dataset files: 406068.98 MB
- Size of the generated dataset: 448718.73 MB
- Total amount of disk used: 932739.13 MB
Dataset Summary
This is the Wikipedia split used to evaluate the Dense Passage Retrieval (DPR) model. It contains 21M passages from Wikipedia along with their DPR embeddings. The Wikipedia articles were split into multiple disjoint text blocks of 100 words, which serve as the passages.
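The 100-word blocking can be sketched as follows. This is an illustrative reimplementation of the splitting described above, not the exact DPR preprocessing script (which also strips tables, infoboxes, and other non-prose content first):

```python
def split_into_passages(text, block_size=100):
    """Split an article into disjoint blocks of `block_size` words.

    Illustrative only: a stand-in for the DPR preprocessing step that
    produces the 21M passages in this dataset.
    """
    words = text.split()
    return [
        " ".join(words[i:i + block_size])
        for i in range(0, len(words), block_size)
    ]

article = " ".join(f"w{i}" for i in range(250))  # dummy 250-word article
passages = split_into_passages(article)
print(len(passages))             # 3 blocks: 100 + 100 + 50 words
print(len(passages[0].split()))  # 100
print(len(passages[-1].split())) # 50
```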
Supported Tasks and Leaderboards
Languages
Dataset Structure
We show detailed information for up to 5 configurations of the dataset.
Data Instances
psgs_w100.multiset.compressed
- Size of downloaded dataset files: 67678.16 MB
- Size of the generated dataset: 74786.45 MB
- Total amount of disk used: 145204.14 MB
An example of 'train' looks as follows.
This example was too long and was cropped:
{
"embeddings": "[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1....",
"id": "0",
"text": "his is the text of a dummy passag",
"title": "Title of the article"
}
psgs_w100.multiset.exact
- Size of downloaded dataset files: 67678.16 MB
- Size of the generated dataset: 74786.45 MB
- Total amount of disk used: 178700.81 MB
An example of 'train' looks as follows.
This example was too long and was cropped:
{
"embeddings": "[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1....",
"id": "0",
"text": "his is the text of a dummy passag",
"title": "Title of the article"
}
psgs_w100.multiset.no_index
- Size of downloaded dataset files: 67678.16 MB
- Size of the generated dataset: 74786.45 MB
- Total amount of disk used: 142464.62 MB
An example of 'train' looks as follows.
This example was too long and was cropped:
{
"embeddings": "[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1....",
"id": "0",
"text": "his is the text of a dummy passag",
"title": "Title of the article"
}
psgs_w100.nq.compressed
- Size of downloaded dataset files: 67678.16 MB
- Size of the generated dataset: 74786.45 MB
- Total amount of disk used: 145204.14 MB
An example of 'train' looks as follows.
This example was too long and was cropped:
{
"embeddings": "[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1....",
"id": "0",
"text": "his is the text of a dummy passag",
"title": "Title of the article"
}
psgs_w100.nq.exact
- Size of downloaded dataset files: 67678.16 MB
- Size of the generated dataset: 74786.45 MB
- Total amount of disk used: 178700.81 MB
An example of 'train' looks as follows.
This example was too long and was cropped:
{
"embeddings": "[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1....",
"id": "0",
"text": "his is the text of a dummy passag",
"title": "Title of the article"
}
Data Fields
The data fields are the same among all splits.
psgs_w100.multiset.compressed
- `id`: a `string` feature.
- `text`: a `string` feature.
- `title`: a `string` feature.
- `embeddings`: a `list` of `float32` features.
psgs_w100.multiset.exact
- `id`: a `string` feature.
- `text`: a `string` feature.
- `title`: a `string` feature.
- `embeddings`: a `list` of `float32` features.
psgs_w100.multiset.no_index
- `id`: a `string` feature.
- `text`: a `string` feature.
- `title`: a `string` feature.
- `embeddings`: a `list` of `float32` features.
psgs_w100.nq.compressed
- `id`: a `string` feature.
- `text`: a `string` feature.
- `title`: a `string` feature.
- `embeddings`: a `list` of `float32` features.
psgs_w100.nq.exact
- `id`: a `string` feature.
- `text`: a `string` feature.
- `title`: a `string` feature.
- `embeddings`: a `list` of `float32` features.
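DPR ranks passages by the inner product between a question embedding and the `embeddings` field of each passage. A minimal brute-force sketch of that ranking is below; it is a hypothetical pure-Python stand-in for the FAISS index shipped with the `exact`/`compressed` configurations, and uses toy 3-dimensional vectors rather than the real 768-dimensional DPR embeddings:

```python
def inner_product(a, b):
    """Dot product between two embedding vectors."""
    return sum(x * y for x, y in zip(a, b))

def retrieve(question_emb, passages, k=2):
    """Return the top-k passages ranked by inner product with the question.

    Brute-force illustration only; the indexed configurations of this
    dataset use a FAISS index for this step instead.
    """
    scored = sorted(
        passages,
        key=lambda p: inner_product(question_emb, p["embeddings"]),
        reverse=True,
    )
    return scored[:k]

# Toy records mirroring the schema above (id, text, title, embeddings).
passages = [
    {"id": "0", "text": "passage a", "title": "A", "embeddings": [1.0, 0.0, 0.0]},
    {"id": "1", "text": "passage b", "title": "B", "embeddings": [0.0, 1.0, 0.0]},
    {"id": "2", "text": "passage c", "title": "C", "embeddings": [0.9, 0.1, 0.0]},
]
top = retrieve([1.0, 0.0, 0.0], passages, k=2)
print([p["id"] for p in top])  # ['0', '2']
```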
Data Splits
| name | train |
|---|---|
| psgs_w100.multiset.compressed | 21015300 |
| psgs_w100.multiset.exact | 21015300 |
| psgs_w100.multiset.no_index | 21015300 |
| psgs_w100.nq.compressed | 21015300 |
| psgs_w100.nq.exact | 21015300 |
Dataset Creation
Curation Rationale
Source Data
Initial Data Collection and Normalization
Who are the source language producers?
Annotations
Annotation process
Who are the annotators?
Personal and Sensitive Information
Considerations for Using the Data
Social Impact of Dataset
Discussion of Biases
Other Known Limitations
Additional Information
Dataset Curators
Licensing Information
Citation Information
@misc{karpukhin2020dense,
title={Dense Passage Retrieval for Open-Domain Question Answering},
author={Vladimir Karpukhin and Barlas Oğuz and Sewon Min and Patrick Lewis and Ledell Wu and Sergey Edunov and Danqi Chen and Wen-tau Yih},
year={2020},
eprint={2004.04906},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
Contributions
Thanks to @thomwolf, @lewtun, @lhoestq for adding this dataset.