updated README.md
README.md CHANGED
@@ -35,7 +35,7 @@ license:
<img src="nomiracl.png" alt="NoMIRACL Hallucination Examination (Generated using miramuse.ai and Adobe photoshop)" width="500" height="400">

## Quick Overview
-This repository contains the topics, qrels and top-k (a maximum of 10) annotated passages. The passage collection can be found
+This repository contains the topics, qrels, and top-k (a maximum of 10) annotated passages. The passage collection can be found here on HF: [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus).

```
import datasets
@@ -45,21 +45,21 @@ subset = 'relevant' # or 'non_relevant' (two subsets: relevant & non-relevant)
split = 'test' # or 'dev' for the development split

# four combinations available: 'dev.relevant', 'dev.non_relevant', 'test.relevant' and 'test.non_relevant'
-nomiracl = datasets.load_dataset('miracl/nomiracl', language, split=f'{split}.{subset}')
+nomiracl = datasets.load_dataset('miracl/nomiracl', language, split=f'{split}.{subset}', trust_remote_code=True)
```

## What is NoMIRACL?
-Retrieval Augmented Generation (RAG) is a powerful approach to
+Retrieval Augmented Generation (RAG) is a powerful approach for incorporating external knowledge into large language models (LLMs) to enhance the accuracy and faithfulness of LLM-generated responses. However, evaluating query-passage relevance across diverse language families has been a challenge, leaving gaps in our understanding of how robust LLMs are to errors in externally retrieved knowledge. To address this, we present NoMIRACL, a completely human-annotated dataset designed for evaluating multilingual LLM relevance across 18 diverse languages.

-NoMIRACL evaluates LLM relevance as a binary classification objective,
+NoMIRACL evaluates LLM relevance as a binary classification task over two subsets: `non-relevant` and `relevant`. The `non-relevant` subset contains queries for which every labeled passage was manually judged non-relevant by an expert assessor, while the `relevant` subset contains queries with at least one passage judged relevant. LLM relevance is measured using two key metrics:
- *hallucination rate* (on the `non-relevant` subset), measuring the model's tendency to hallucinate an answer when none of the provided passages is relevant to the question (non-answerable).
- *error rate* (on the `relevant` subset), measuring the model's tendency to fail to identify a relevant passage when one is provided for the question (answerable).

## Acknowledgement

-This dataset would not have been possible without all the topics are generated by native speakers of each language in
+This dataset would not have been possible without the topics, all of which were generated by native speakers of each language in conjunction with part 1 of our **multilingual RAG universe** work, **MIRACL** [[TACL '23]](https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00595/117438/MIRACL-A-Multilingual-Retrieval-Dataset-Covering). Queries for which all labeled passages are non-relevant are used to create the `non-relevant` subset, whereas queries with at least a single relevant passage (i.e., the MIRACL dev and test splits) are used to create the `relevant` subset.

-This repository contains the topics, qrels and top-10 (maximum) annotated documents of NoMIRACL. The whole collection can be found
+This repository contains the topics, qrels, and top-10 (maximum) annotated documents of NoMIRACL. The whole collection can be found [here](https://huggingface.co/datasets/miracl/miracl-corpus).

## Quickstart

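The two metrics above reduce to simple ratios over binary model decisions on each subset. Below is a minimal sketch with made-up predictions, not the official NoMIRACL evaluation code; the `answer_present_*` lists are hypothetical model outputs meaning "the model claims a relevant passage is present".

```
# Toy illustration of the two NoMIRACL metrics from binary model decisions.
# The `answer_present_*` lists are hypothetical model outputs, not dataset fields.

def hallucination_rate(answer_present_on_non_relevant):
    # Share of non-answerable queries where the model still claims an answer exists.
    preds = list(answer_present_on_non_relevant)
    return sum(preds) / len(preds)

def error_rate(answer_present_on_relevant):
    # Share of answerable queries where the model misses the provided relevant passage.
    preds = list(answer_present_on_relevant)
    return sum(1 for p in preds if not p) / len(preds)

print(hallucination_rate([True, False, False, True]))  # 0.5
print(error_rate([True, True, False, True]))           # 0.25
```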
@@ -71,7 +71,7 @@ subset = 'relevant' # or 'non_relevant'
split = 'test' # or 'dev' for development split

# four combinations available: 'dev.relevant', 'dev.non_relevant', 'test.relevant' and 'test.non_relevant'
-nomiracl = datasets.load_dataset('miracl/nomiracl', language, split=f'{split}.{subset}')
+nomiracl = datasets.load_dataset('miracl/nomiracl', language, split=f'{split}.{subset}', trust_remote_code=True)
```

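To sanity-check the updated Quickstart call above, a minimal sketch is shown below. The `language = 'german'` config name is an illustrative assumption (use any of the 18 language configs listed on this card); the final prints simply inspect what was loaded.

```
import datasets

language = 'german'   # assumed example; pick a language config listed on this card
subset = 'relevant'   # or 'non_relevant'
split = 'test'        # or 'dev'

nomiracl = datasets.load_dataset('miracl/nomiracl', language,
                                 split=f'{split}.{subset}',
                                 trust_remote_code=True)

print(len(nomiracl))  # number of queries in this split/subset
print(nomiracl[0])    # fields of a single example
```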
@@ -84,7 +84,7 @@ nomiracl = datasets.load_dataset('miracl/nomiracl', language, split=f'{split}.{s
1. To download the files:

Under folders `data/{lang}`,
-the subset of corpus is saved in `.jsonl.gz` format, with each line to be:
+the subset of the corpus is saved in `.jsonl.gz` format, with each line of the form:
```
{"docid": "28742#27",
"title": "Supercontinent",
@@ -134,7 +134,7 @@ Paper: [https://aclanthology.org/2024.findings-emnlp.730/](https://aclanthology.

## Citation Information
-This work was conducted as a collaboration between University of Waterloo and Huawei Technologies.
+This work was conducted as a collaboration between the University of Waterloo and Huawei Technologies.

```
@inproceedings{thakur-etal-2024-knowing,
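For the manual-download route in the `@@ -84` hunk above, the `data/{lang}` corpus subsets are gzipped JSON lines. A minimal reading sketch follows, assuming only the `docid` and `title` keys visible in the example line; the glob pattern and folder name are placeholders.

```
import glob
import gzip
import json

# Placeholder pattern; actual folder and file names under data/{lang} may differ.
for path in glob.glob('data/german/*.jsonl.gz'):
    with gzip.open(path, 'rt', encoding='utf-8') as f:
        first = json.loads(next(f))      # parse the first JSON line of each file
        print(path, first['docid'], first['title'])
```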