Update README.md
---
license: mit
task_categories:
- question-answering
language:
- en
tags:
- chemistry
- molecule
---
# Dataset Card for MoleculeQA

<!-- Provide a quick summary of the dataset. -->

## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->
[MoleculeQA: A Dataset to Evaluate Factual Accuracy in Molecular Comprehension (EMNLP 2024)](https://aclanthology.org/2024.findings-emnlp.216)

- **Curated by:** [IDEA-XL](https://github.com/IDEA-XL)
- **Language(s) (NLP):** en
- **License:** mit

### Dataset Sources

<!-- Provide the basic links for the dataset. -->

- **Repository:** https://github.com/IDEA-XL/MoleculeQA
- **Paper:** https://arxiv.org/abs/2403.08192

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

```
- JSON
  - All
    - train.json  # 49,993 QA pairs
    - valid.json  # 5,795 QA pairs
    - test.json   # 5,786 QA pairs
- TXT
  - All
    - train.txt
    - valid.txt
    - test.txt
  - Property
  - Source
  - Structure
  - Usage
```
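
The JSON splits can be read with the standard library alone; below is a minimal loading sketch. The root directory name and the assumption that each split file is a single JSON document are illustrative guesses based on the tree above, not guarantees of this card:

```
import json
from pathlib import Path

# Root of a local copy of the dataset; adjust to your checkout (assumption).
ROOT = Path("MoleculeQA")

splits = {}
for name in ("train", "valid", "test"):
    # Paths follow the JSON/All layout shown in the tree above.
    with open(ROOT / "JSON" / "All" / f"{name}.json", encoding="utf-8") as f:
        splits[name] = json.load(f)

# Expected sizes from the tree above: 49,993 / 5,795 / 5,786.
for name, records in splits.items():
    print(name, len(records))
```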

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63458f173cc8a5caf9b84e48/gr1PDjhOXP-6c7Z8KaAMb.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63458f173cc8a5caf9b84e48/QELSG-259d4o1ByD-hi4H.png)

## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->
Large language models are playing an increasingly significant role in molecular research, yet existing models often generate erroneous information. Traditional evaluations fail to assess a model's factual correctness. To rectify this absence, we present MoleculeQA, a novel question answering (QA) dataset which possesses 62K QA pairs over 23K molecules. Each QA pair, composed of a manual question, a positive option and three negative options, has consistent semantics with a molecular description from an authoritative corpus. MoleculeQA is not only the first benchmark to evaluate molecular factual correctness but also the largest molecular QA dataset. A comprehensive evaluation on MoleculeQA for existing molecular LLMs exposes their deficiencies in specific aspects and pinpoints crucial factors for molecular modeling. Furthermore, we employ MoleculeQA in reinforcement learning to mitigate model hallucinations, thereby enhancing the factual correctness of generated information.
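
Because each item pairs one positive option with three negatives, evaluation reduces to four-way multiple choice scored by accuracy. A minimal sketch follows; the record fields (`question`, `options`, `answer`) are illustrative assumptions, not the dataset's documented schema:

```
# Hypothetical record shape (assumed field names): a manual question,
# four lettered options (one positive, three negatives), and the gold letter.
record = {
    "question": "Which source does this molecule derive from?",
    "options": {"A": "...", "B": "...", "C": "...", "D": "..."},
    "answer": "A",
}

def accuracy(predicted_letters, records):
    """Fraction of items where the predicted option letter matches the gold answer."""
    hits = sum(p == r["answer"] for p, r in zip(predicted_letters, records))
    return hits / len(records)
```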

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63458f173cc8a5caf9b84e48/qbOw0mIWTztzZhbkWn0Tk.png)

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63458f173cc8a5caf9b84e48/FqkfVhXeMJ6vaoY6Utqdp.png)

## Citation
**BibTeX:**
```
@inproceedings{lu-etal-2024-moleculeqa,
    title = "{M}olecule{QA}: A Dataset to Evaluate Factual Accuracy in Molecular Comprehension",
    author = "Lu, Xingyu and
      Cao, He and
      Liu, Zijing and
      Bai, Shengyuan and
      Chen, Leqing and
      Yao, Yuan and
      Zheng, Hai-Tao and
      Li, Yu",
    editor = "Al-Onaizan, Yaser and
      Bansal, Mohit and
      Chen, Yun-Nung",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2024",
    month = nov,
    year = "2024",
    address = "Miami, Florida, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.findings-emnlp.216",
    pages = "3769--3789",
    abstract = "Large language models are playing an increasingly significant role in molecular research, yet existing models often generate erroneous information. Traditional evaluations fail to assess a model{'}s factual correctness. To rectify this absence, we present MoleculeQA, a novel question answering (QA) dataset which possesses 62K QA pairs over 23K molecules. Each QA pair, composed of a manual question, a positive option and three negative options, has consistent semantics with a molecular description from authoritative corpus. MoleculeQA is not only the first benchmark to evaluate molecular factual correctness but also the largest molecular QA dataset. A comprehensive evaluation on MoleculeQA for existing molecular LLMs exposes their deficiencies in specific aspects and pinpoints crucial factors for molecular modeling. Furthermore, we employ MoleculeQA in reinforcement learning to mitigate model hallucinations, thereby enhancing the factual correctness of generated information.",
}
```

## Dataset Card Authors

[He CAO (CiaoHe)](https://github.com/CiaoHe)
|