Dataset: wiki_dpr

How to load this dataset directly with the 🤗/nlp library:

from nlp import load_dataset
dataset = load_dataset("wiki_dpr")

Description

This is the Wikipedia split used to evaluate the Dense Passage Retrieval (DPR) model. It contains 21M passages from Wikipedia along with their DPR embeddings. The Wikipedia articles were split into multiple, disjoint text blocks of 100 words, each of which serves as a passage.
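As a rough sketch of how the passages and their precomputed embeddings might be accessed after loading (field names such as "title", "text", and "embeddings" are assumptions here and may differ depending on the configuration):

from nlp import load_dataset

# Load the passages; downloading the full split is large (21M passages).
dataset = load_dataset("wiki_dpr", split="train")

# Each row is assumed to hold a passage's article title, its text,
# and the precomputed DPR embedding vector.
example = dataset[0]
print(example["title"])            # article title (assumed field name)
print(example["text"][:100])       # start of the 100-word passage (assumed field name)
print(len(example["embeddings"]))  # dimensionality of the DPR embedding (assumed field name)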

Citation

@misc{karpukhin2020dense,
    title={Dense Passage Retrieval for Open-Domain Question Answering},
    author={Vladimir Karpukhin and Barlas Oğuz and Sewon Min and Patrick Lewis and Ledell Wu and Sergey Edunov and Danqi Chen and Wen-tau Yih},
    year={2020},
    eprint={2004.04906},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}

Models trained or fine-tuned on wiki_dpr

None yet. Start fine-tuning now =)