Tasks: Text Retrieval
Sub-tasks: document-retrieval
Multilinguality: multilingual
Annotations Creators: expert-generated
ArXiv:
License:
crystina-z committed 17eb868 (parent: 9cc3274): Update README.md
This dataset contains the collection data of the 16 "known languages". The remai
The topics are generated by native speakers of each language, who also label the relevance between the topics and a given document list.

This repository only contains the topics and qrels of MIRACL. The collection can be found [here](https://huggingface.co/datasets/miracl/miracl-corpus).

## Dataset Structure

1. To download the files:

Under folders `miracl-v1.0-{lang}/topics`, the topics are saved in `.tsv` format, where each line has the form:
```
qid\tquery
```
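A topics file in this format can be read with a few lines of Python; the sketch below is ours, and the `load_topics` name and file path are illustrative, not part of the dataset tooling:

```python
def load_topics(path):
    """Read a MIRACL-style topics .tsv file into a dict mapping qid -> query."""
    topics = {}
    with open(path, encoding='utf-8') as f:
        for line in f:
            # Split on the first tab only, in case a query contains tabs.
            qid, query = line.rstrip('\n').split('\t', 1)
            topics[qid] = query
    return topics
```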
Under folders `miracl-v1.0-{lang}/qrels`, the qrels are saved in standard TREC format, where each line has the form:
```
qid Q0 docid relevance
```
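A qrels file in this TREC format can likewise be parsed into nested dicts; again, the helper name is illustrative rather than part of the official tooling:

```python
from collections import defaultdict

def load_qrels(path):
    """Read a TREC-format qrels file into a dict mapping qid -> {docid: relevance}."""
    qrels = defaultdict(dict)
    with open(path, encoding='utf-8') as f:
        for line in f:
            # Each line: qid Q0 docid relevance (whitespace-separated).
            qid, _q0, docid, rel = line.split()
            qrels[qid][docid] = int(rel)
    return dict(qrels)
```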
2. To access the data using HuggingFace `datasets`:
```python
import datasets

lang = 'ar'  # or any of the 16 languages
miracl = datasets.load_dataset('miracl/miracl', lang, use_auth_token=True)

# training set:
for data in miracl['train']:  # or 'dev'
    query_id = data['query_id']
    query = data['query']
    positive_passages = data['positive_passages']
    negative_passages = data['negative_passages']

    for entry in positive_passages:  # or negative_passages
        docid = entry['docid']
        title = entry['title']
        text = entry['text']
```
The structure is the same for the `train` and `dev` sets.
Note that the `negative_passages` are also annotated by native speakers, rather than being sampled from the non-positive passages of top-`k` retrieval results.
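To connect the two views, TREC-style qrels lines can be regenerated from a loaded example. The sketch below is ours and assumes relevance 1 for positive passages and 0 for negative ones; the sample record is made up for illustration:

```python
def to_qrels_lines(example):
    """Turn one MIRACL example into TREC-format qrels lines.

    Assumes positive passages carry relevance 1 and negatives 0.
    """
    lines = []
    for entry in example['positive_passages']:
        lines.append(f"{example['query_id']} Q0 {entry['docid']} 1")
    for entry in example['negative_passages']:
        lines.append(f"{example['query_id']} Q0 {entry['docid']} 0")
    return lines

# Made-up record with the fields described above:
example = {
    'query_id': '7',
    'query': 'an example query',
    'positive_passages': [{'docid': '10#2', 'title': 't1', 'text': 'x'}],
    'negative_passages': [{'docid': '11#0', 'title': 't2', 'text': 'y'}],
}
```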
## Dataset Statistics

The following table shows the number of queries (`#Q`) and the number of judgments (`#J`) in each language for the training and development sets, where the judgments include both positive and negative samples.
| Lang | Train #Q | Train #J | Dev #Q | Dev #J |
|:----:|:--------:|:--------:|:------:|:------:|
| ar   | 3,495    | 25,382   | 2,896  | 29,197 |
| bn   | 1,631    | 16,754   | 411    | 4,206  |
| en   | 2,863    | 29,416   | 799    | 8,350  |
| es   | 2,162    | 21,531   | 648    | 6,443  |
| fa   | 2,107    | 21,844   | 632    | 6,571  |
| fi   | 2,897    | 20,350   | 1,271  | 12,008 |
| fr   | 1,143    | 11,426   | 343    | 3,429  |
| hi   | 1,169    | 11,668   | 350    | 3,494  |
| id   | 3,998    | 39,885   | 939    | 9,344  |
| ja   | 3,477    | 34,387   | 860    | 8,354  |
| ko   | 868      | 12,767   | 213    | 3,057  |
| ru   | 4,683    | 33,921   | 1,252  | 13,100 |
| sw   | 1,901    | 9,359    | 482    | 5,092  |
| te   | 3,452    | 18,608   | 828    | 1,606  |
| th   | 2,993    | 22,057   | 747    | 7,861  |
| zh   | 1,312    | 13,113   | 393    | 3,928  |