nthakur committed
Commit 7712349
1 Parent(s): 757f0f8

modified README.md

Files changed (1):
  1. README.md +50 -40

README.md CHANGED
@@ -34,29 +34,47 @@ task_categories:
 
  license:
  - apache-2.0
-
- task_ids:
- - document-retrieval
  ---

 # Dataset Card for NoMIRACL

- ## Dataset Description
- * **Repository:** https://github.com/project-miracl/nomiracl
- * **Paper:** https://arxiv.org/abs/2312.11361

- <!-- MIRACL 🌏🙌🌍 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.

- This dataset contains the collection data of the 16 "known languages". The remaining 2 "surprise languages" will not be released until later.

- The topics are generated by native speakers of each language, who also label the relevance between the topics and a given document list.

- This repository only contains the topics and qrels of MIRACL. The collection can be found [here](https://huggingface.co/datasets/miracl/miracl-corpus).

 ## Dataset Structure
 1. To download the files:
- Under folders `miracl-v1.0-{lang}/topics`,
 the topics are saved in `.tsv` format, with each line to be:
 ```
 qid\tquery
@@ -68,14 +86,19 @@ the qrels are saved in standard TREC format, with each line to be:
 qid Q0 docid relevance
 ```

-
  2. To access the data using HuggingFace `datasets`:
 ```
- lang='ar' # or any of the 16 languages
- miracl = datasets.load_dataset('miracl/miracl', lang, use_auth_token=True)

 # training set:
- for data in miracl['train']: # or 'dev', 'testA'
 query_id = data['query_id']
 query = data['query']
 positive_passages = data['positive_passages']
@@ -86,30 +109,17 @@ for data in miracl['train']: # or 'dev', 'testA'
 title = entry['title']
 text = entry['text']
 ```
- The structure is the same for `train`, `dev`, and `testA` set, where `testA` only exists for languages in Mr. TyDi (i.e., Arabic, Bengali, English, Finnish, Indonesian, Japanese, Korean, Russian, Swahili, Telugu, Thai).
- Note that `negative_passages` are annotated by native speakers as well, instead of the non-positive passages from top-`k` retrieval results.
-

 ## Dataset Statistics
- The following table contains the number of queries (`#Q`) and the number of judgments (`#J`) in each language, for the training and development set,
- where the judgments include both positive and negative samples.
-
- | Lang | Train | | Dev | |
- |:----:|:-----:|:------:|:-----:|:------:|
- | | **#Q**| **#J** |**#Q** |**#J** |
- | ar | 3,495 | 25,382 | 2,896 | 29,197 |
- | bn | 1,631 | 16,754 | 411 | 4,206 |
- | en | 2,863 | 29,416 | 799 | 8,350 |
- | es | 2,162 | 21,531 | 648 | 6,443 |
- | fa | 2,107 | 21,844 | 632 | 6,571 |
- | fi | 2,897 | 20,350 | 1,271 | 12,008 |
- | fr | 1,143 | 11,426 | 343 | 3,429 |
- | hi | 1,169 | 11,668 | 350 | 3,494 |
- | id | 4,071 | 41,358 | 960 | 9,668 |
- | ja | 3,477 | 34,387 | 860 | 8,354 |
- | ko | 868 | 12,767 | 213 | 3,057 |
- | ru | 4,683 | 33,921 | 1,252 | 13,100 |
- | sw | 1,901 | 9,359 | 482 | 5,092 |
- | te | 3,452 | 18,608 | 828 | 1,606 |
- | th | 2,972 | 21,293 | 733 | 7,573 |
- | zh | 1,312 | 13,113 | 393 | 3,928 | -->


 license:
 - apache-2.0
 ---

 # Dataset Card for NoMIRACL

+ Retrieval Augmented Generation (RAG) is a powerful approach to incorporate external knowledge into large language models (LLMs) to enhance the accuracy and faithfulness of generated responses. However, evaluating LLM robustness in RAG across different language families has been a challenge, leading to gaps in understanding the model's performance against errors in external retrieved knowledge. To address this, we present NoMIRACL, a human-annotated dataset designed for evaluating LLM robustness in RAG across 18 diverse languages.
+ NoMIRACL includes both a `non-relevant` and a `relevant` subset. The `non-relevant` subset contains queries with all passages manually judged as non-relevant or noisy, while the `relevant` subset includes queries with at least one judged relevant passage. LLM robustness is measured using two key metrics: hallucination rate and error rate.
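The precise metric definitions are given in the paper. As a rough sketch only (an assumed operationalization, not the official evaluation code), both metrics can be computed from per-query abstention decisions:

```python
# Rough sketch of the two robustness metrics (assumed operationalization;
# see the NoMIRACL paper for the official definitions). The assumption here:
# a robust model should abstain ("I don't know") on the non-relevant subset
# and should not abstain on the relevant subset.

def hallucination_rate(abstained_on_non_relevant):
    """Fraction of non-relevant-subset queries where the model answered anyway."""
    return sum(not a for a in abstained_on_non_relevant) / len(abstained_on_non_relevant)

def error_rate(abstained_on_relevant):
    """Fraction of relevant-subset queries where the model wrongly abstained."""
    return sum(abstained_on_relevant) / len(abstained_on_relevant)
```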
+ All the topics are generated by native speakers of each language from our work in [MIRACL](https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00595/117438/MIRACL-A-Multilingual-Retrieval-Dataset-Covering), who also label the relevance between the topics and a given document list. The queries with no relevant documents are used to create the `non-relevant` subset, whereas queries with at least one relevant document (i.e., queries in the MIRACL dev and test sets) are used to create the `relevant` subset.
+ This repository contains the topics, qrels, and top-10 (maximum) annotated documents of NoMIRACL. The whole collection can be found [here](https://huggingface.co/datasets/miracl/miracl-corpus).
+
+ ## Quickstart
+
+ ```
+ import datasets
+
+ language = 'german' # or any of the 18 languages
+ subset = 'relevant' # or 'non_relevant'
+ split = 'test' # or 'dev' for development split
+
+ nomiracl = datasets.load_dataset('miracl/nomiracl', language, split=f'{split}.{subset}')
+ ```
+
+
+ ## Dataset Description
+ * **Repository:** https://github.com/project-miracl/nomiracl
+ * **Paper:** https://arxiv.org/abs/2312.11361

 ## Dataset Structure
 1. To download the files:
+
+ Under folders `data/{lang}`,
+ the corpus subset is saved in `.jsonl.gz` format, with each line to be:
+ ```
+ {"docid": "28742#27",
+ "title": "Supercontinent",
+ "text": "Oxygen levels of the Archaean Eon were negligible and today they are roughly 21 percent. [ ... ]"}
+ ```
+
+ Under folders `data/{lang}/topics`,
 the topics are saved in `.tsv` format, with each line to be:
 ```
 qid\tquery

 qid Q0 docid relevance
 ```
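For working with the raw files described above, here is a minimal sketch of parsers for the three formats (the helper names are illustrative, not part of the repository):

```python
import json

def load_corpus(jsonl_lines):
    """Parse corpus lines like {"docid": ..., "title": ..., "text": ...} into a dict keyed by docid."""
    return {doc["docid"]: doc for doc in map(json.loads, jsonl_lines)}

def load_topics(tsv_lines):
    """Parse topic lines of the form 'qid<TAB>query' into a qid -> query dict."""
    return dict(line.rstrip("\n").split("\t", 1) for line in tsv_lines)

def load_qrels(trec_lines):
    """Parse TREC qrels lines 'qid Q0 docid relevance' into qid -> {docid: relevance}."""
    qrels = {}
    for line in trec_lines:
        qid, _q0, docid, rel = line.split()
        qrels.setdefault(qid, {})[docid] = int(rel)
    return qrels
```

The `.jsonl.gz` corpus files can be opened with `gzip.open(path, 'rt', encoding='utf-8')` and the resulting file handle passed to `load_corpus` directly.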

 2. To access the data using HuggingFace `datasets`:
 ```
+ import datasets
+
+ language = 'german' # or any of the 18 languages
+ subset = 'relevant' # or 'non_relevant'
+ split = 'test' # or 'dev' for development split
+
+ # four combinations: 'dev.relevant', 'dev.non_relevant', 'test.relevant' and 'test.non_relevant'
+ nomiracl = datasets.load_dataset('miracl/nomiracl', language, split=f'{split}.{subset}')

 # training set:
+ for data in nomiracl: # iterates over the selected split
 query_id = data['query_id']
 query = data['query']
 positive_passages = data['positive_passages']

 title = entry['title']
 text = entry['text']
 ```
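As a small usage example built on the field layout above, per-split counts of queries and judgments can be gathered like this (the function name is illustrative):

```python
def count_stats(split):
    """Count queries and passage judgments (positive + negative) in a loaded split."""
    n_queries, n_judgments = 0, 0
    for data in split:
        n_queries += 1
        n_judgments += len(data["positive_passages"]) + len(data["negative_passages"])
    return n_queries, n_judgments
```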

 ## Dataset Statistics
+ For NoMIRACL dataset statistics, please refer to our publication [here](https://arxiv.org/abs/2312.11361).
+
+
+ ## Citation Information
+ ```
+ @article{thakur2023nomiracl,
+   title={NoMIRACL: Knowing When You Don't Know for Robust Multilingual Retrieval-Augmented Generation},
+   author={Nandan Thakur and Luiz Bonifacio and Xinyu Zhang and Odunayo Ogundepo and Ehsan Kamalloo and David Alfonso-Hermelo and Xiaoguang Li and Qun Liu and Boxing Chen and Mehdi Rezagholizadeh and Jimmy Lin},
+   journal={ArXiv},
+   year={2023},
+   volume={abs/2312.11361}
+ }
+ ```