---
annotations_creators:
- expert-generated
language:
- ar
- bn
- de
- en
- es
- fa
- fi
- fr
- hi
- id
- ja
- ko
- ru
- sw
- te
- th
- yo
- zh
multilinguality:
- multilingual
pretty_name: NoMIRACL
size_categories:
- 10K<n<100K
source_datasets:
- miracl/miracl
task_categories:
- text-classification
license:
- apache-2.0
---
# Dataset Card for NoMIRACL
Retrieval Augmented Generation (RAG) is a powerful approach for incorporating external knowledge into large language models (LLMs) to improve the accuracy and faithfulness of generated responses. However, evaluating LLM robustness in RAG across different language families has been challenging, leaving gaps in our understanding of how models behave when the retrieved external knowledge is erroneous. To address this, we present NoMIRACL, a human-annotated dataset for evaluating LLM robustness in RAG across 18 typologically diverse languages.
NoMIRACL includes both a `non-relevant` and a `relevant` subset. The `non-relevant` subset contains queries for which all retrieved passages were manually judged non-relevant or noisy, while the `relevant` subset contains queries with at least one passage judged relevant. LLM robustness is measured with two key metrics: the hallucination rate (how often the model produces an answer on the `non-relevant` subset, where no passage contains one) and the error rate (how often the model fails to recognize relevant passages in the `relevant` subset).
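As a rough illustration of these metrics (a sketch, not the official evaluation code; the function names and the binary verdict encoding are assumptions), they reduce to simple fractions over per-query model verdicts:
```
# Minimal sketch of the two robustness metrics. The verdict encoding
# (True = model attempted an answer, False = model said "no answer")
# is an assumption for illustration, not the official NoMIRACL harness.

def hallucination_rate(non_relevant_verdicts):
    # Non-relevant subset: every passage is judged non-relevant, so the
    # correct behaviour is to say "no answer". Answering anyway counts
    # as a hallucination.
    answered = sum(non_relevant_verdicts)
    return answered / len(non_relevant_verdicts)

def error_rate(relevant_verdicts):
    # Relevant subset: at least one passage is relevant, so the correct
    # behaviour is to answer. Saying "no answer" counts as an error.
    abstained = sum(1 for v in relevant_verdicts if not v)
    return abstained / len(relevant_verdicts)

# e.g. hallucination_rate([True, False, True]) == 2/3
```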
All topics are generated by native speakers of each language as part of our work on [MIRACL](https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00595/117438/MIRACL-A-Multilingual-Retrieval-Dataset-Covering), who also label the relevance between the topics and a given document list. Queries with no relevant documents are used to create the `non-relevant` subset, whereas queries with at least one relevant document (i.e., queries in the MIRACL dev and test splits) are used to create the `relevant` subset.
This repository contains the topics, qrels and (up to) the top-10 annotated documents of NoMIRACL. The whole document collection can be found [here](https://huggingface.co/datasets/miracl/miracl-corpus).
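If you prefer working with the raw files instead of the `datasets` loader, the repository can be fetched with `huggingface_hub`; `local_dir` below is an arbitrary choice for this sketch:
```
from huggingface_hub import snapshot_download

# Download the NoMIRACL topics, qrels and annotated documents locally.
# 'local_dir' is an illustrative choice, not a required path.
snapshot_download(repo_id='miracl/nomiracl',
                  repo_type='dataset',
                  local_dir='nomiracl')
```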
## Quickstart
```
import datasets
language = 'german' # or any of the 18 languages
subset = 'relevant' # or 'non_relevant'
split = 'test' # or 'dev' for development split
# four combinations available: 'dev.relevant', 'dev.non_relevant', 'test.relevant' and 'test.non_relevant'
nomiracl = datasets.load_dataset('miracl/nomiracl', language, split=f'{split}.{subset}')
```
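Each record exposes the fields described under Dataset Structure below; a quick sanity check on the loaded split:
```
# Inspect the first record of the split loaded above.
sample = nomiracl[0]
print(sample['query_id'], sample['query'])
print(f"{len(sample['positive_passages'])} relevant, "
      f"{len(sample['negative_passages'])} non-relevant passages")
```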
## Dataset Description
* **Repository:** https://github.com/project-miracl/nomiracl
* **Paper:** https://arxiv.org/abs/2312.11361
## Dataset Structure
1. To download the files (a sketch for parsing all three file formats follows this list):
Under the folder `data/{lang}`,
the corpus subset is saved in `.jsonl.gz` format, with each line of the form:
```
{"docid": "28742#27",
"title": "Supercontinent",
"text": "Oxygen levels of the Archaean Eon were negligible and today they are roughly 21 percent. [ ... ]"}
```
Under the folder `data/{lang}/topics`,
the topics are saved in `.tsv` format, with each line of the form:
```
qid\tquery
```
Under the folder `data/{lang}/qrels`,
the qrels are saved in standard TREC format, with each line of the form:
```
qid Q0 docid relevance
```
2. To access the data using HuggingFace `datasets`:
```
import datasets
language = 'german' # or any of the 18 languages
subset = 'relevant' # or 'non_relevant'
split = 'test' # or 'dev' for development split
# four combinations: 'dev.relevant', 'dev.non_relevant', 'test.relevant' and 'test.non_relevant'
nomiracl = datasets.load_dataset('miracl/nomiracl', language, split=f'{split}.{subset}')
# iterate over the loaded split:
for data in nomiracl:
query_id = data['query_id']
query = data['query']
positive_passages = data['positive_passages']
negative_passages = data['negative_passages']
for entry in positive_passages: # OR 'negative_passages'
docid = entry['docid']
title = entry['title']
text = entry['text']
```
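For completeness, here is a sketch of reading the three raw file formats above with the Python standard library; the concrete file paths are illustrative placeholders, not the exact filenames in the repository:
```
import csv
import gzip
import json

# Corpus subset: one JSON object per line ('docid', 'title', 'text').
with gzip.open('data/de/corpus.jsonl.gz', 'rt', encoding='utf-8') as f:  # illustrative path
    corpus = {doc['docid']: doc for doc in map(json.loads, f)}

# Topics: tab-separated 'qid\tquery' pairs.
with open('data/de/topics/topics.dev.tsv', encoding='utf-8') as f:  # illustrative path
    topics = dict(csv.reader(f, delimiter='\t'))

# Qrels: standard TREC format 'qid Q0 docid relevance'.
qrels = {}
with open('data/de/qrels/qrels.dev.tsv', encoding='utf-8') as f:  # illustrative path
    for line in f:
        qid, _, docid, relevance = line.split()
        qrels.setdefault(qid, {})[docid] = int(relevance)
```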
## Dataset Statistics
For NoMIRACL dataset statistics, please refer to our publication [here](https://arxiv.org/abs/2312.11361).
## Citation Information
```
@article{thakur2023nomiracl,
title={NoMIRACL: Knowing When You Don't Know for Robust Multilingual Retrieval-Augmented Generation},
author={Nandan Thakur and Luiz Bonifacio and Xinyu Zhang and Odunayo Ogundepo and Ehsan Kamalloo and David Alfonso-Hermelo and Xiaoguang Li and Qun Liu and Boxing Chen and Mehdi Rezagholizadeh and Jimmy Lin},
journal={ArXiv},
year={2023},
volume={abs/2312.11361}
}
```