---
annotations_creators:
- expert-generated
language:
- ar
- bn
- de
- en
- es
- fa
- fi
- fr
- hi
- id
- ja
- ko
- ru
- sw
- te
- th
- yo
- zh
multilinguality:
- multilingual
pretty_name: NoMIRACL
size_categories:
- 10K<n<100K
source_datasets:
- miracl/miracl
task_categories:
- text-classification
license:
- apache-2.0
---

# Dataset Card for NoMIRACL (EMNLP 2024 Findings Track)
<img src="nomiracl.png" alt="NoMIRACL Hallucination Examination (Generated using miramuse.ai and Adobe Photoshop)" width="500" height="400">

## Quick Overview
This repository contains the topics, qrels, and top-k (up to 10) annotated passages per query. The full passage collection is available on Hugging Face: [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).

```python
import datasets

language = 'german'  # or any of the 18 languages (mentioned above in `languages`)
subset = 'relevant'  # or 'non_relevant' (two subsets: relevant & non-relevant)
split = 'test'       # or 'dev' for the development split

# four combinations available: 'dev.relevant', 'dev.non_relevant', 'test.relevant' and 'test.non_relevant'
nomiracl = datasets.load_dataset('miracl/nomiracl', language, split=f'{split}.{subset}', trust_remote_code=True)
```

## What is NoMIRACL?
Retrieval-Augmented Generation (RAG) is a powerful approach for incorporating external knowledge into large language models (LLMs) to improve the accuracy and faithfulness of LLM-generated responses. However, query-passage relevance has not been comprehensively evaluated across diverse language families, leaving gaps in our understanding of how robust LLMs are to errors in externally retrieved knowledge. To address this, we present NoMIRACL, a completely human-annotated dataset for evaluating multilingual LLM relevance assessment across 18 typologically diverse languages.

NoMIRACL frames LLM relevance assessment as a binary classification task and contains two subsets: `non-relevant` and `relevant`. The `non-relevant` subset contains queries for which every labeled passage was manually judged non-relevant by an expert assessor, while the `relevant` subset contains queries with at least one passage judged relevant. LLM relevance assessment is measured using two key metrics:
- *hallucination rate* (on the `non-relevant` subset): measures the model's tendency to hallucinate an answer when none of the provided passages is relevant to the question (non-answerable).
- *error rate* (on the `relevant` subset): measures the model's failure to recognize a relevant passage when one is provided for the question (answerable).
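
To make the two metrics concrete, here is a minimal sketch (not the paper's evaluation code) of how they can be computed from binary model verdicts; `model_says_answerable` is a hypothetical mapping from query ID to whether the model claims that at least one provided passage is relevant.

```python
# Minimal sketch, assuming `model_says_answerable` maps query_id -> bool
# (True if the model claims at least one provided passage is relevant).

def hallucination_rate(non_relevant_query_ids, model_says_answerable):
    """Fraction of non-answerable queries the model still claims to answer."""
    wrong = sum(model_says_answerable[qid] for qid in non_relevant_query_ids)
    return wrong / len(non_relevant_query_ids)

def error_rate(relevant_query_ids, model_says_answerable):
    """Fraction of answerable queries where the model misses the relevant passages."""
    wrong = sum(not model_says_answerable[qid] for qid in relevant_query_ids)
    return wrong / len(relevant_query_ids)
```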

## Acknowledgement

This dataset would not have been possible without the topics generated by native speakers of each language in **MIRACL** [[TACL '23]](https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00595/117438/MIRACL-A-Multilingual-Retrieval-Dataset-Covering), part 1 of our **multilingual RAG universe** work. Queries for which all labeled passages are non-relevant are used to build the `non-relevant` subset, whereas queries with at least one relevant passage (i.e., from the MIRACL dev and test splits) are used to build the `relevant` subset.

This repository contains the topics, qrels and top-10 (maximum) annotated documents of NoMIRACL. The whole collection can be found [here](https://huggingface.co/datasets/miracl/miracl-corpus).

## Quickstart

```python
import datasets

language = 'german'  # or any of the 18 languages
subset = 'relevant'  # or 'non_relevant'
split = 'test'       # or 'dev' for development split

# four combinations available: 'dev.relevant', 'dev.non_relevant', 'test.relevant' and 'test.non_relevant'
nomiracl = datasets.load_dataset('miracl/nomiracl', language, split=f'{split}.{subset}', trust_remote_code=True)
```


## Dataset Description
* **Website:** https://nomiracl.github.io
* **Paper:** https://aclanthology.org/2024.findings-emnlp.730/
* **Repository:** https://github.com/project-miracl/nomiracl

## Dataset Structure
1. To download the files:

Under the `data/{lang}` folders,
the corpus subset is stored in `.jsonl.gz` format, where each line is of the form:
```
{"docid": "28742#27", 
"title": "Supercontinent", 
"text": "Oxygen levels of the Archaean Eon were negligible and today they are roughly 21 percent. [ ... ]"}
```
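
A corpus shard can be streamed with standard-library tools, as in the minimal sketch below; the file path is illustrative and depends on the language and shard you downloaded.

```python
import gzip
import json

# Stream one corpus shard line by line (illustrative path).
with gzip.open("data/en/corpus.jsonl.gz", "rt", encoding="utf-8") as f:
    for line in f:
        doc = json.loads(line)
        docid, title, text = doc["docid"], doc["title"], doc["text"]
```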

Under the `data/{lang}/topics` folders,
the topics are stored in `.tsv` format, where each line is of the form:
```
qid\tquery
```

Under the `miracl-v1.0-{lang}/qrels` folders,
the qrels are stored in the standard TREC format, where each line is of the form:
```
qid Q0 docid relevance
```
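
A minimal sketch for loading the topics and qrels is shown below; the file paths are illustrative, so adjust them to the actual files in the folders above.

```python
import csv

# Load topics: one (qid, query) pair per line, tab-separated (illustrative path).
topics = {}  # qid -> query
with open("data/en/topics/topics.test.tsv", encoding="utf-8") as f:
    for qid, query in csv.reader(f, delimiter="\t"):
        topics[qid] = query

# Load qrels in TREC format: "qid Q0 docid relevance" (illustrative path).
qrels = {}  # qid -> {docid: relevance}
with open("miracl-v1.0-en/qrels/qrels.test.tsv", encoding="utf-8") as f:
    for line in f:
        qid, _, docid, relevance = line.split()
        qrels.setdefault(qid, {})[docid] = int(relevance)
```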

2. To access the data using HuggingFace `datasets`:
```python
import datasets

language = 'german'  # or any of the 18 languages
subset = 'relevant'  # or 'non_relevant'
split = 'test'       # or 'dev' for development split

# four combinations: 'dev.relevant', 'dev.non_relevant', 'test.relevant' and 'test.non_relevant'
nomiracl = datasets.load_dataset('miracl/nomiracl', language, split=f'{split}.{subset}', trust_remote_code=True)

# Iterate over individual entries in the `relevant` or `non_relevant` subset
for data in nomiracl:
  query_id = data['query_id']
  query = data['query']
  positive_passages = data['positive_passages']
  negative_passages = data['negative_passages']
  
  for entry in positive_passages: # OR 'negative_passages'
    docid = entry['docid']
    title = entry['title']
    text = entry['text']
```
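
Since NoMIRACL frames relevance assessment as a binary decision, a loaded entry can be turned into a yes/no prompt for an LLM. The sketch below is illustrative only; the exact prompt template used in the paper differs (see the GitHub repository).

```python
# Illustrative only: the prompt template used in the paper lives in the GitHub repo.
def build_prompt(data, max_passages=10):
    """Build a yes/no relevance prompt from one NoMIRACL entry loaded above."""
    passages = (data['positive_passages'] + data['negative_passages'])[:max_passages]
    context = "\n\n".join(
        f"[{i + 1}] {p['title']}: {p['text']}" for i, p in enumerate(passages)
    )
    return (
        f"Question: {data['query']}\n\n"
        f"Passages:\n{context}\n\n"
        "Do any of the passages above answer the question? Answer 'Yes' or 'No'."
    )
```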

## Dataset Statistics 
For NoMIRACL dataset statistics, please refer to our EMNLP 2024 Findings publication.

Paper: [https://aclanthology.org/2024.findings-emnlp.730/](https://aclanthology.org/2024.findings-emnlp.730/).


## Citation Information
This work was conducted as a collaboration between the University of Waterloo and Huawei Technologies.

```
@inproceedings{thakur-etal-2024-knowing,
    title = "{``}Knowing When You Don{'}t Know{''}: A Multilingual Relevance Assessment Dataset for Robust Retrieval-Augmented Generation",
    author = "Thakur, Nandan  and
      Bonifacio, Luiz  and
      Zhang, Crystina  and
      Ogundepo, Odunayo  and
      Kamalloo, Ehsan  and
      Alfonso-Hermelo, David  and
      Li, Xiaoguang  and
      Liu, Qun  and
      Chen, Boxing  and
      Rezagholizadeh, Mehdi  and
      Lin, Jimmy",
    editor = "Al-Onaizan, Yaser  and
      Bansal, Mohit  and
      Chen, Yun-Nung",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2024",
    month = nov,
    year = "2024",
    address = "Miami, Florida, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.findings-emnlp.730",
    pages = "12508--12526",
    abstract = "Retrieval-Augmented Generation (RAG) grounds Large Language Model (LLM) output by leveraging external knowledge sources to reduce factual hallucinations. However, prior work lacks a comprehensive evaluation of different language families, making it challenging to evaluate LLM robustness against errors in external retrieved knowledge. To overcome this, we establish **NoMIRACL**, a human-annotated dataset for evaluating LLM robustness in RAG across 18 typologically diverse languages. NoMIRACL includes both a non-relevant and a relevant subset. Queries in the non-relevant subset contain passages judged as non-relevant, whereas queries in the relevant subset include at least a single judged relevant passage. We measure relevance assessment using: (i) *hallucination rate*, measuring model tendency to hallucinate when the answer is not present in passages in the non-relevant subset, and (ii) *error rate*, measuring model inaccuracy to recognize relevant passages in the relevant subset. In our work, we observe that most models struggle to balance the two capacities. Models such as LLAMA-2 and Orca-2 achieve over 88{\%} hallucination rate on the non-relevant subset. Mistral and LLAMA-3 hallucinate less but can achieve up to a 74.9{\%} error rate on the relevant subset. Overall, GPT-4 is observed to provide the best tradeoff on both subsets, highlighting future work necessary to improve LLM robustness. NoMIRACL dataset and evaluation code are available at: https://github.com/project-miracl/nomiracl.",
}
```