---
annotations_creators:
  - expert-generated
language:
  - ar
  - bn
  - en
  - es
  - fa
  - fi
  - fr
  - hi
  - id
  - ja
  - ko
  - ru
  - sw
  - te
  - th
  - zh
  - de
  - yo
multilinguality:
  - multilingual
pretty_name: MIRACL-corpus
source_datasets: []
task_categories:
  - text-retrieval
license:
  - apache-2.0
task_ids:
  - document-retrieval
---

# Dataset Card for MIRACL (Topics and Qrels)

## Dataset Description

Homepage | Repository | Paper | ArXiv

MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.

This dataset contains the data for the 16 "known languages". The remaining 2 "surprise languages" will not be released until later.

The topics are written by native speakers of each language, who also judge the relevance between the topics and a given list of candidate documents.

This repository only contains the topics and qrels of MIRACL. The collection can be found here.

## Dataset Structure

1. To download the files: under the folders `miracl-v1.0-{lang}/topics`, the topics are saved in `.tsv` format, where each line has the form:

   ```
   qid\tquery
   ```

   Under the folders `miracl-v1.0-{lang}/qrels`, the qrels are saved in the standard TREC format, where each line has the form:

   ```
   qid Q0 docid relevance
   ```

   A minimal sketch for parsing these files is shown below the list.
2. To access the data using HuggingFace `datasets`:

   ```python
   import datasets

   lang = 'ar'  # or any of the 16 languages
   miracl = datasets.load_dataset('miracl/miracl', lang, use_auth_token=True)

   # training set:
   for data in miracl['train']:  # or 'dev', 'testA'
       query_id = data['query_id']
       query = data['query']
       positive_passages = data['positive_passages']
       negative_passages = data['negative_passages']

       for entry in positive_passages:  # or negative_passages
           docid = entry['docid']
           title = entry['title']
           text = entry['text']
   ```

The structure is the same for the train, dev, and testA sets, where testA only exists for the languages in Mr. TyDi (i.e., Arabic, Bengali, English, Finnish, Indonesian, Japanese, Korean, Russian, Swahili, Telugu, Thai). Note that `negative_passages` are also annotated by native speakers, rather than being the non-positive passages taken from top-k retrieval results.
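
For the files downloaded in step 1, a few lines of plain Python are enough to load the topics and qrels into dictionaries. This is only a minimal sketch: the file paths below are illustrative placeholders and should be replaced with the paths of the topics and qrels files you actually downloaded.

```python
from collections import defaultdict

# Placeholder paths; point these at the downloaded topics/qrels files.
topics_path = 'miracl-v1.0-ar/topics/topics.miracl-v1.0-ar-dev.tsv'
qrels_path = 'miracl-v1.0-ar/qrels/qrels.miracl-v1.0-ar-dev.tsv'

# Topics: one "qid\tquery" pair per line.
topics = {}
with open(topics_path, encoding='utf-8') as f:
    for line in f:
        qid, query = line.rstrip('\n').split('\t', maxsplit=1)
        topics[qid] = query

# Qrels: standard TREC format, "qid Q0 docid relevance" per line.
qrels = defaultdict(dict)
with open(qrels_path, encoding='utf-8') as f:
    for line in f:
        qid, _, docid, relevance = line.split()
        qrels[qid][docid] = int(relevance)

print(f'{len(topics)} topics, {sum(len(d) for d in qrels.values())} judgments')
```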

## Dataset Statistics

The following table reports the number of queries (#Q) and the number of judgments (#J) in each language, for the training and development sets, where the judgments include both positive and negative samples.

| Lang | Train #Q | Train #J | Dev #Q | Dev #J |
|:----:|---------:|---------:|-------:|-------:|
| ar | 3,495 | 25,382 | 2,896 | 29,197 |
| bn | 1,631 | 16,754 | 411 | 4,206 |
| en | 2,863 | 29,416 | 799 | 8,350 |
| es | 2,162 | 21,531 | 648 | 6,443 |
| fa | 2,107 | 21,844 | 632 | 6,571 |
| fi | 2,897 | 20,350 | 1,271 | 12,008 |
| fr | 1,143 | 11,426 | 343 | 3,429 |
| hi | 1,169 | 11,668 | 350 | 3,494 |
| id | 4,071 | 41,358 | 960 | 9,668 |
| ja | 3,477 | 34,387 | 860 | 8,354 |
| ko | 868 | 12,767 | 213 | 3,057 |
| ru | 4,683 | 33,921 | 1,252 | 13,100 |
| sw | 1,901 | 9,359 | 482 | 5,092 |
| te | 3,452 | 18,608 | 828 | 1,606 |
| th | 2,972 | 21,293 | 733 | 7,573 |
| zh | 1,312 | 13,113 | 393 | 3,928 |
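
As a sanity check, the counts above can be cross-checked against the HuggingFace dataset itself, treating each example as one query and counting its positive and negative passages as judgments. This is a rough sketch and assumes access to the gated dataset has already been granted:

```python
import datasets

lang = 'ar'  # pick any of the 16 languages
miracl = datasets.load_dataset('miracl/miracl', lang, use_auth_token=True)

for split in ('train', 'dev'):
    n_queries = len(miracl[split])
    n_judgments = sum(
        len(row['positive_passages']) + len(row['negative_passages'])
        for row in miracl[split]
    )
    print(f'{lang} {split}: #Q={n_queries:,}  #J={n_judgments:,}')
```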