configs:
- data_files:
  - split: test
    path: data/test-*
license: cc-by-nc-2.0
language:
- en
size_categories:
- 1K<n<10K
---

# PubMedQA subset of HaluBench

## Dataset

This dataset contains the PubMedQA subset of HaluBench, created by Patronus AI and available from [PatronusAI/HaluBench](https://huggingface.co/datasets/PatronusAI/HaluBench).

The dataset was originally published in the paper _[PubMedQA: A Dataset for Biomedical Research Question Answering](https://arxiv.org/abs/1909.06146)_.

The PubMedQA subset of HaluBench applies additional perturbations to the original dataset to generate hallucinated answers that appear plausible but are not faithful to the context, as described in _[Lynx: An Open Source Hallucination Evaluation Model](https://arxiv.org/abs/2407.08488)_.

## Preprocessing

We mapped the original hallucination labels as follows:

- "PASS" (no hallucination) to 1
- "FAIL" (hallucination) to 0
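
As a minimal sketch of this mapping (the helper name is ours, not part of the dataset):

```python
def map_label(label: str) -> int:
    """Map a HaluBench hallucination label to a binary score.

    "PASS" (no hallucination) -> 1, "FAIL" (hallucination) -> 0.
    """
    if label == "PASS":
        return 1
    if label == "FAIL":
        return 0
    raise ValueError(f"Unexpected label: {label!r}")
```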

## Evaluation criteria and rubric

We aligned our evaluation criteria and rubric with those used in the Lynx paper. The LM judge uses these criteria and the rubric to produce a score for each response.
50 |
+
|
51 |
+
```python
|
52 |
+
EVALUATION_CRITERIA = "Evaluate whether the information provided in the answer is factually accurate and directly supported by the context given in the document, without any fabricated or hallucinated details."
|
53 |
+
|
54 |
+
RUBRIC = [
|
55 |
+
{
|
56 |
+
"score": 0,
|
57 |
+
"description": "The answer is not supported by the document. It contains inaccuracies, fabrications, or details that are not present in the document."
|
58 |
+
},
|
59 |
+
{
|
60 |
+
"score": 1,
|
61 |
+
"description": "The answer is fully supported by the document. It is factually accurate and all details are directly derived from the document."
|
62 |
+
}
|
63 |
+
|
64 |
+
]
|
65 |
+
```
|
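
For context, here is one way a rubric of this shape could be rendered into judge-prompt text. This is an illustrative sketch only, not the exact prompt used by Lynx, and the helper name is hypothetical:

```python
def render_rubric(criteria: str, rubric: list[dict]) -> str:
    """Format evaluation criteria and a scoring rubric as plain prompt text."""
    lines = [f"Criteria: {criteria}", "Rubric:"]
    for entry in rubric:
        lines.append(f"- Score {entry['score']}: {entry['description']}")
    return "\n".join(lines)


# Tiny example with the same shape as RUBRIC above.
prompt_text = render_rubric(
    "Is the answer supported by the document?",
    [
        {"score": 0, "description": "Not supported."},
        {"score": 1, "description": "Fully supported."},
    ],
)
```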